ActiveResponse.org

Re-Imagining Cyber Security

Enabling Skype Two Factor Authentication

Skype is an important communications medium, and that importance makes it a target for adversaries.  Two-factor authentication is one of the most effective defensive actions you can take to prevent successful attacks.  Therefore, you should enable two-factor authentication for Skype!

However, this is not straightforward.  Here’s how you do it.  THE THIRD STEP IS THE MOST IMPORTANT – otherwise an adversary can bypass two-factor authentication by simply logging in with the Skype username.

  1. Link Skype and Microsoft Account
  2. Enable two-factor authentication for the associated Microsoft account
  3. Disable login via Skype username (via the sign-in preference panel)

State-Associated Hackers Target Me – Here’s What To Do When They Come After You

You were just told state-associated hackers attacked your account.  Congratulations!  You’ve joined a select club and your work has received recognition – though probably not of the variety for which you were hoping.  According to Google, less than 0.1% of accounts unlock this achievement.  But what do you do now?!

I’ve attracted the same attention over my career studying and countering targeted threats, and through my work at the Global Emancipation Network disrupting human trafficking on the internet (simultaneously angering international organized criminal groups).  I’ve also notified and counseled countless victims of state-associated activity.

I’ll tell you what I’ve done in this situation.  Hopefully you can apply some of the lessons to your own.  However, this isn’t going to work for everyone; each of you will need to take your own mission and specific situation into account.  Further, I will try to dispel some of the myths and FUD which surround detection of state-associated attacks.

Don’t Panic

I can’t overstate this – DON’T PANIC.  You must get over your initial feelings of unease, disgust, anger, and possibly fear.  Panic makes for bad decision-making, and that’s exactly what NOT to do in this situation.  Here is what you need to understand to help you make good decisions…

Myth 1: There is nothing you can do and you’re on a kill list

Reality: While there have been cases of activists targeted through their digital accounts which likely led to their deaths, this is extremely rare.  And you will already know if this is a real possibility in your specific case.  Usually, adversaries are just interested in the following three things:

  • Intelligence about you and your business dealings
  • Access to your contacts (to know them and further use your account to compromise them)
  • Access to your organization(s) (to use your access to compromise enterprises)

You are not without recourse and effective defense.  Those who claim that state-associated adversaries will always win are wrong.  Yes, they pose a difficult problem, but any hyperbole beyond that is incorrect.

Myth 2: If they wanted access to your account(s) and computers, they’d already have them.

Reality: Huh?  That makes no logical sense.  If they already had access, how did they get it in the first place?  This is just defeatism and has no grounding in reality.  The truth is that adversaries attack not just once but MANY times.  It’s likely that if you’re a target you will remain a target.  Further, it’s likely that you are a target of more than one adversary.  Also, let’s say you were previously compromised.  The adversary may have lost their access (due to either action/inaction on their part or yours) and needs to regain it.  So, DON’T YOU DARE GIVE UP NOW.  You have been given a new opportunity to defend yourself.  TAKE IT.

Myth 3: You were specifically targeted

Reality: Yes – you made it onto a list.  But you are probably one of THOUSANDS targeted in a single campaign, and one of TENS of THOUSANDS targeted by that adversary this year.  It’s very likely that if the adversary doesn’t succeed you will remain on their target list for a long time and receive many attacks over time.  You will now need to remain constantly vigilant.  But it remains that you were probably not singled out.

Myth 4: You should have already taken action

Reality: Yes, taking defensive action before an attack is worth much more than action taken afterwards.  However, that assumes a world that doesn’t exist.  None of us, not even the greatest security researcher in the world, takes perfect precautions.  And even if the adversary was successful, that doesn’t mean we give up and cede the battlefield – we fight back!  We retake what is ours.

Myth 5: State hackers can’t be detected and never attack where they will be detected

Reality: Hackers are driven by the motivation to succeed in their mission.  If they don’t succeed they don’t get paid, don’t get promoted, or maybe face a firing squad.  They will ultimately attack a victim via whatever method will work.  Effectiveness will almost always outweigh potential detection.  Now, I’ll add the caveat that each adversary and operation carries a different risk model, so this won’t hold for EVERY operation – but it holds for most.  So, yes, they will attack you via LinkedIn messages, Gmail, your organization’s email account, Facebook Messenger, etc.  They will sometimes be caught.  How do I know?  Because they’re caught all the time.

Assess the Situation

You alone will be able to assess the situation.  What is your business or role?  Will your life be in danger?  What information is possibly compromised?  Do you know if the adversary was successful or was it just an attempt?

Use all the intelligence you have about yourself and the adversary to understand the right actions to take.

Inform Your Community

Tell others!  Tell your community!  The adversary likely targeted not just you but many others within your community, enterprise/organization, or area of interest.  By informing others you empower them with situational awareness for them to learn from you and hopefully strengthen their own defense.

Note: Those who victim-blame or otherwise talk down those who publicize their attacks are working against community defense and helping the adversary by stigmatizing attacks.  Our community should actively call this out and STOP IT.

Myth 6: Telling others informs the adversary you know

Reality: First, understand that the adversary is not scared of you, and their hubris likely keeps them from recognizing you as a capable actor.  If they find out you know, they will discuss it while laughing over a beer – but probably nothing else will happen.  You will need to make a personal intelligence gain/loss decision here based on your own assessment.

However, DON’T USE A COMPROMISED COMMUNICATIONS CHANNEL TO DISCUSS THE ADVERSARY!  Use another channel.  This will likely reduce any risk.

Get Help & Defend Yourself

  1. If you don’t know what to do, or even if you think you do, get some help and advice.  Some places to turn are ProPublica and CitizenLab.
  2. Turn on two-factor authentication – ON EVERY ACCOUNT POSSIBLE.  Hardware-based two-factor (e.g., a YubiKey) is the best choice, but any is better than nothing.
  3. Reset passwords.  If possible, from a computer you don’t normally use.
  4. Set an alternative notification method.  If possible, set up an alert whenever an account receives a successful or unsuccessful login attempt.
  5. Check email accounts for any forwarding rules.  Some adversaries create forwarding rules once they gain access to the account to persist access.
  6. Check account logs.  Some providers allow you to examine account access logs.  Don’t just look at the location but also the time and method of access.  This can give you an indication if compromise was successful.
  7. Rebuild computers you use on a regular basis.  This includes phones.
  8. Increase use of encrypted communications (PGP, Signal, etc.)
  9. Start encrypting your data at rest (when stored on a drive) to prevent value being extracted after exfiltration
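
To make steps 4 and 6 concrete, here’s a minimal sketch (in Python) of reviewing an exported account access log for anomalous logins.  The log format, known locations, and thresholds below are all hypothetical – adapt them to whatever your provider actually exports:

```python
from datetime import datetime

# Hypothetical exported access-log rows: (ISO timestamp, location, access method)
log = [
    ("2017-01-10T09:14:00", "Seattle, US", "web"),
    ("2017-01-10T03:22:00", "Kyiv, UA", "IMAP"),
    ("2017-01-11T10:05:00", "Seattle, US", "web"),
]

KNOWN_LOCATIONS = {"Seattle, US"}   # places you actually log in from
KNOWN_METHODS = {"web", "mobile"}   # methods you actually use

def suspicious(entries):
    """Flag logins from unknown locations, unusual methods, or odd hours."""
    flagged = []
    for ts, loc, method in entries:
        hour = datetime.fromisoformat(ts).hour
        if loc not in KNOWN_LOCATIONS or method not in KNOWN_METHODS or hour < 6:
            flagged.append((ts, loc, method))
    return flagged

for entry in suspicious(log):
    print("REVIEW:", entry)
```

Note that location alone is insufficient – the time and access method (here, an off-hours IMAP login) are often the better tell, exactly as step 6 advises.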

IMPORTANT: If you suspect successful compromise, don’t use the compromised machine.  Get help to investigate and remediate.  Unplug or turn off the machine.  Otherwise, just rebuild from a clean source.

Remain Vigilant

Once targeted, the adversary will likely target you again in the future.  Periodically conduct the defensive steps above and remember those below:

  • Be careful of those who request access to your social network, such as LinkedIn connect requests
  • It’s okay to suspect items from colleagues or illustrious organizations and ask for verification via another channel
  • Keep your applications and OS up to date with patches
  • Browse sites wisely
  • Rebuild regularly both phones and computers
  • Don’t enable macros

2016 Targeted Threats in Review – What We’ve Learned

In 2016 we saw some amazing targeted threat events – election disruption, major financial fraud, more zero-days, new groups uncovered, etc.  Information security tends towards “firefighting” mode, always concerned with the latest or newest.  However, too easily we fall into “recency bias,” falsely giving the most recent events the greatest weight in our mental model.  So, to break that bias, I take time each year to look back over the threats we saw earlier to correct my perspective.  The most interesting items are those which challenge and break assumptions, causing me to reevaluate my models.  They’re the events which surprise and impress seasoned analysts.

This year I saw items fall into three categories:

  • Victimology & Intent – While targeted threats continue to conduct traditional espionage activities against both government and industry, both their victimology and intent are shifting slightly to better align with a changing motivation.
  • Capabilities – Adversaries continue to value effectiveness over all else to support their operations (this year: macros and increasing evasion techniques)
  • The “Duller Objects” – I call them the “dull objects” because they’re not shiny and don’t usually get much attention.  Yet, they can pose as much of a threat as any shiny object and act as bellwethers.

Victimology & Intent

US Industry Victims

In September 2015, the US and China agreed that neither would “conduct or knowingly support theft of intellectual property.”  FireEye reported in “Red Line Drawn” that there has been a significant drawdown of US-targeted operations since at least 2014.  But, while there has been a decline, they could not solely attribute it to the agreement.  Similarly, Symantec reported that in 2016 a notable threat group previously operating heavily against US targets, Buckeye, ceased US operations and shifted strongly towards Hong Kong.

What we learned: While no public smoking gun exists showing that the US-China agreement had a direct effect on US victims, there is a clear shift in operations according to FireEye and Symantec.  This illustrates that even the long-standing “China hacking the US” narrative – which drove a majority of targeted threat protection investment – can shift.

Swift Attacks

In February 2016 an adversary leveraged access into the Bangladesh Central Bank to defraud other banks of $81M – attempting fraudulent transactions of over $1B.  Symantec correlated the attack to the Lazarus Group based in the DPRK via malware analysis and similarity to the tools used to attack Sony Pictures.  SWIFT is the backbone of the global financial network, supporting over 6 billion messages/year between over 200 countries and territories.  However, thanks to a small typo by the adversary in the name of the non-profit supposedly benefiting from the transactions, an analyst caught the error and found the fraud, allowing SWIFT and the banks to unravel the operation.  Later, analysts discovered further fraudulent activity, possibly implying that either more than one group is active in the network or the Lazarus Group had deeper or wider access than initially thought.

What we learned: If we trust Symantec’s analysis and attribution to the Lazarus Group, this represents a significant event: state-enabled cyber actors conducting global financial fraud.  This could cause other financially struggling states to look to their cyber capabilities to enable financial fraud, and it increases the scope of victimology for groups which fit this profile.

Summer of STRONTIUM


Probably the biggest security story this year – and by far the biggest targeted threat ever in the public dialog: hacks into the Democratic National Committee (DNC), Democratic Congressional Campaign Committee (DCCC), World Anti-Doping Agency (WADA), and many others during a tumultuous year involving the US presidential election and the 2016 Olympics in Rio de Janeiro.

Traditionally, targeted threats like STRONTIUM exploit victims to gather intelligence and use it quietly, avoiding all attention.  They rarely use the intelligence publicly.  In this case, they posted (at least some of) the intelligence publicly in large dumps.  However, this is not the first example of such activity.  Previous examples include the dumping of Sony Pictures emails and the Syrian Electronic Army (SEA) dumping Forbes’ databases.

This, alongside other activity including intrusions into state election offices, made headlines and potentially affected the US elections.

What we learned: STRONTIUM reinforced that the value of intelligence lies not in what you collect, but in how you use it.  Like the Sony email dump, this likely points to a growing trend of dumping intelligence rather than hoarding it – especially in cases where the greatest harm ultimately comes from disruption.  This will further reinforce the relationship between cyber activities and traditional information operations.

Capabilities

Macros – Macros Everywhere


Relegated by many to the past – macros came back this year with a vengeance.  Reported by both Microsoft and Trend Micro, macro threats rose significantly during 2016.

This was not limited to commodity threats (such as ransomware) but also adopted by targeted threats as well.

Luckily, Office 2016 included new techniques for administrators and defenders to block macro attacks.

What we learned: I always teach my students that “what was old will be new again” in the threat landscape.  It is critical that professionals study the threats of old because they, or some variant, will likely return.  However, our hubris often dismisses these threats because they’re not as “cool” as the newer techniques.  But, as we know, adversaries care less about how new a technique is than about how effective it is.

Evasion

Anti-forensics and sandbox evasion are not new.  However, as the security industry has matured and shortened the life-cycle of many threats, malware authors are increasingly using sandbox evasion throughout their entire kill chain.  Because the failure of static signatures has forced network defense to rely increasingly on automated techniques (such as dynamic malware detonation), malicious capabilities naturally respond to the defensive environment by disrupting those automated techniques.  Lastly, security researchers are increasingly effective at discovering and uncovering operations, and they pose equal harm to malicious activity.

For targeted threats, this includes not only the long-term Trojan or backdoor, but the entire operations chain beginning with the first stage.  It is now common to encounter JavaScript reconnaissance profiling the victim, with some back-end logic determining whether they’re “worthy” of receiving the dropper or later stages.  This victim profiling includes the locale of the host, domain affiliation, hostname, etc.  Further techniques include just-in-time decryption and modularization to prevent the full capability from being captured.
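
The victim-profiling and “worthiness” logic described above can be illustrated with a small sketch (shown in Python for readability – real first stages are typically JavaScript, and every marker and name below is hypothetical):

```python
def worthy(fingerprint):
    """Hypothetical back-end 'worthiness' check: deliver the next stage
    only to hosts that look like real targets, not analysis sandboxes."""
    host = fingerprint.get("hostname", "").lower()
    domain = fingerprint.get("domain", "").lower()

    # Sandbox evasion: refuse obvious analysis environments
    if any(marker in host for marker in ("sandbox", "cuckoo", "malware", "vm-")):
        return False

    # Victim profiling: deliver only to hosts in the targeted org and locale
    return domain == "corp.example" and fingerprint.get("locale", "").startswith("en")

# A corporate workstation receives the dropper; an analysis VM does not.
print(worthy({"hostname": "ws-042", "domain": "corp.example", "locale": "en_US"}))
print(worthy({"hostname": "cuckoo-vm1", "domain": "corp.example", "locale": "en_US"}))
```

The defensive takeaway: if your sandbox fingerprint looks nothing like a real victim host, this class of first stage will simply never hand it the payload.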

What we learned: This reinforces our understanding that adversaries will always react to the security environment.  Elements which harm their operations, such as security researchers or automated analytics, will always be countered over time.

The “Duller Objects”

Platinum


PLATINUM is a fascinating activity.  Their operations read like a textbook for targeted threats: zero-day usage, long-term persistence, novel DLL injection, time-limited operations to prevent discovery, anti-forensics techniques, political targeting, etc.

What we learned: Not only did we learn some fascinating new targeted-threat techniques to bolster our defenses, PLATINUM also taught us that a threat operating since at least 2009 can still be newly discovered in 2016.  This means there are likely many more targeted threats operating than are cataloged by threat intelligence analysts.

Targeted Threat As A Service

What if you’re an actor with little-to-no native capability, yet you need intelligence collected from digital targets?  You go and buy it.  2016 saw some of the most advanced targeted threats as a service emerge.  Citizen Lab uncovered and documented an amazing history of targeting and exploitation against a human rights activist in the Middle East.  Notably, this exploitation prompted one of the largest security patches to Apple’s iOS, addressing three separate vulnerabilities.  The group responsible is NSO Group from Israel.

Of course, this is not new – we’ve previously known about FinFisher and HackingTeam.  Their commercial hacking services target a range of victims.  Their hoarding of zero-days is also a target for others and a vulnerability for all – as seen in the HackingTeam dump.

What we learned: It is amazing how little coverage these threats receive – yet the companies have public-facing profiles and usually disclose their activities on their websites.  More concerning, the less capable states tend to be those with the worst human rights records, and they use this commercial technology to gather intelligence on activists and others.  If we care about protecting the most vulnerable, these services should receive greater attention from the community.

Building Threat Hunting Strategies with the Diamond Model

Hunting cyber threats (especially those never seen previously) is the most expensive and difficult threat intelligence endeavor.  Hunting is a risk because you’re betting that there is something there to find – and that you can find it.  An effective hunter may come up empty most of the time.  Creating an effective threat hunting strategy ensures greater chances for a return on the investment.

An effective strategy includes answering four critical questions and employing the right approach to achieve the goal.  The Diamond Model identifies several “centered-approaches” enabling effective threat hunting.  Tying these approaches together creates the basis for a hunting strategy.  Without a strategy your chances of failure increase dramatically.

Building a Hunting Strategy with the 4 Hunting Questions

Announcing “I’m going threat hunting” is akin to saying, “I’m going fishing.”  Both are such vague phrases that they generally require a follow-up question: “For what?”  Some may answer “malware” or “lateral movement” just as others answer “salmon” or “bass.”  The next question, naturally, is “where?”  This leads us to the first critical element of a hunting strategy: answering the critical questions.

If you can’t answer these questions well, you might as well go back to what you were doing, because you’ll likely end up just wasting time and resources.  Hunting requires patience and discipline.  These four questions are the core of any hunting strategy.

The 4 Hunting Questions

There are four critical questions necessary to build a hunting strategy, and they’re best answered in this order:

  1. What are you hunting?
    • Hunting is expensive and risky.  You must narrow down exactly which activity you are hunting for.  Is it exploitation?  Is it lateral movement?  Is it exfiltration?
  2. Where will you find it?
    • What you are hunting will determine where you will find the activity.  You must next narrow down the telemetry sources which will provide visibility into the activity AND access to that stored telemetry.
  3. How will you find it?
    • Once you’ve identified what you’re looking for and where you’ll likely find it, you must next identify the tools to hunt with.  You don’t catch salmon and bass in the same way – and you won’t catch exploitation and lateral movement in the same way either.
  4. When will you find it?
    • Have a time bound for your hunting.  A never-ending chase will lead you nowhere.  Allot a specific amount of time necessary to achieve your goal, and if you come up empty at that time – move on to the next target.  If you have to feed your family, and you go out salmon fishing but catch nothing – it’s probably best to instead go after another fish or game before everyone dies of starvation 🙂  Likewise, management may likely lose patience with your hunting if you don’t deliver value.
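
The four questions above can be captured as a simple record a team fills in before any hunt begins – a sketch in Python, with illustrative field values:

```python
from dataclasses import dataclass

@dataclass
class HuntingStrategy:
    """The four critical hunting questions, answered up front."""
    what: str        # which activity is being hunted
    where: list      # telemetry sources providing visibility into it
    how: list        # tools/analytics applied to that telemetry
    when_days: int   # time bound before moving to the next target

    def is_complete(self):
        """A hunt shouldn't start until every question has an answer."""
        return bool(self.what and self.where and self.how and self.when_days > 0)

hunt = HuntingStrategy(
    what="lateral movement via stolen credentials",
    where=["Active Directory logon events", "VPN logs"],
    how=["rare account-to-host pairs", "off-hours logon analytics"],
    when_days=14,
)
```

Writing the answers down in this form also gives management a concrete artifact when they ask what the hunting time was spent on.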

From Strategy to Approach

Once you’ve answered the four critical hunting questions – you must then design the approach.  The approach not only describes the modes and methods of your hunting but, more importantly, addresses the “why.”  The “why” establishes your hypothesis.

Hunters must build and test many hypotheses at once.  Each failed hypothesis can lead to a failed hunt.  For instance, the hunter hypothesizes that they’re breached.  Why else would they be hunting?  Of course, if they’re not – the hunt fails.  The hunter hypothesizes the adversary leverages identities to move across assets.  So, this hypothesis leads the hunter to examine Active Directory logs.  Of course, if the adversary uses file shares they may not show up in AD – the hunt fails.
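
The Active Directory hypothesis above can be made concrete with a minimal sketch over hypothetical logon events: count the distinct hosts each account authenticates to and flag outliers, a common lateral-movement signal.

```python
from collections import defaultdict

# Hypothetical logon events extracted from AD logs: (account, host)
events = [
    ("alice", "ws-01"), ("alice", "ws-01"),
    ("bob", "ws-02"),
    ("svc-backup", "ws-01"), ("svc-backup", "ws-02"),
    ("svc-backup", "srv-db"), ("svc-backup", "srv-dc"),
]

def flag_lateral_movement(logons, max_hosts=3):
    """Flag accounts authenticating to more distinct hosts than expected."""
    hosts = defaultdict(set)
    for account, host in logons:
        hosts[account].add(host)
    return sorted(a for a, h in hosts.items() if len(h) > max_hosts)

print(flag_lateral_movement(events))  # a lead to investigate, not proof of compromise
```

Note the failure mode from the paragraph above: if the adversary moves via file shares instead of interactive logons, these events may never fire – which is why hunters run several hypotheses at once.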

This step is critical because hunting is a big risk and cost.  And, establishing not just the “how” but also the “why” will help hunters critically examine their approach and look for other methods possibly overlooked.

When hunting adversaries you must always question your approach and look for more creative and effective methods.

The Diamond Model Centered Approaches

The Diamond Model establishes the event as the most basic element of any malicious activity, composed of four core features: the adversary, the victim, infrastructure, and capability.  All malicious activity contains these features (as established in Axiom 1).  Therefore, any hunting is ultimately based on these features, and any hunting approach contains a mix of these “centered approaches.”

However, don’t consider these approaches in isolation.  A mix of approaches used in concert achieves greater coverage.

The Diamond Model of Intrusion Analysis. An event is shown illustrating the core features of every malicious activity: adversary, victim, capability, and infrastructure. The features are connected based on their underlying relationship.

Named for the feature on which they’re based, the approaches are:

The Victim-Centered Approach

The news of several determined adversaries targeting a single human rights activist is an excellent example of the victim-centered approach.  A victim-centered approach uses the victim as the central element for hunting and looks to illuminate the other Diamond-connected features (i.e., capabilities, infrastructure, adversaries).  The victim-centered hunt is the analytic equivalent of a “honeypot.”

Network defenders will most likely focus on the victim-centered approach.  It provides the greatest benefit and the easiest path, with the highest likelihood of actionable results.  There are many modes and methods within this approach.  Chris Gerritz (@gerritzc) details several victim-centered modes and methods in his post: Approaches to Threat Hunting.

Advantages: catches many adversaries, many hunting opportunities (e.g., network attacks, malicious email delivery, etc.), easily obtained data (usually)

Disadvantages: possible overwhelming amount of malicious activity, too many hunting opportunities can dilute an undisciplined hunting effort

Tips: focus hunt on a single phase of the kill-chain at a time

See Diamond Model Section 7.1.1

An Example Victim-Centered Hunting Strategy

[Why] We hypothesize that several adversaries target a specific victim.

[Why] We further hypothesize that adversaries deliver their capabilities via email (as most do).

[Why] Our hypothesis is strengthened through data that most attacks are delivered via email and our organization has previously received email-borne delivery attacks.

[What] Our hunting goal: collect intelligence on adversary attacks in the email delivery phase.

[Where & How] Therefore, our victim-centered hunting approach includes gaining visibility into the victim email and apply tools which illuminate likely malicious elements (links, attachments).  Our primary method will involve detonating attachments and hyperlinks.  Our secondary method will involve sender-receiver graph analysis and header inconsistencies.

[When] We will apply this approach and methodology for 2 weeks after achieving access to data.

This hunting strategy reveals:

  • Capabilities: the tools and techniques used by an adversary to compromise and operate against a victim (e.g., in our example: the malicious attachments)
  • Infrastructure: the logical and physical elements necessary to manage capabilities (e.g., in our example: the email source, malicious attachment C2, URLs)
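
The secondary method above (header inconsistencies) can be sketched with Python’s standard-library email parser.  The message and the mismatch rule are illustrative; real hunts would check many more tells (Received chains, Message-ID domains, etc.):

```python
from email import message_from_string
from email.utils import parseaddr

# Hypothetical delivery-phase email with a classic From/Reply-To mismatch
raw = """From: HR Team <hr@yourcompany.com>
Reply-To: hr-payroll@badstuff-yourcompany.com
Subject: Updated payroll form
To: you@yourcompany.com

Please open the attached form.
"""

def header_inconsistencies(raw_message):
    """Flag a delivery-phase tell: Reply-To domain differs from From domain."""
    msg = message_from_string(raw_message)
    findings = []
    from_dom = parseaddr(msg.get("From", ""))[1].rpartition("@")[2]
    reply_dom = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2]
    if reply_dom and reply_dom != from_dom:
        findings.append(f"Reply-To domain {reply_dom!r} != From domain {from_dom!r}")
    return findings

print(header_inconsistencies(raw))
```

Each finding feeds the primary method: messages with inconsistent headers are the first candidates for attachment and hyperlink detonation.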

The Infrastructure-Centered Approach

While network defenders will generally take the victim-centered approach, it’s not the only hunting approach available.  The infrastructure-centered approach enables hunters to identify malicious infrastructure and possibly pivot to identify capabilities, victims, and more infrastructure.  Most importantly, because infrastructure generally must be operational before capabilities and victims connect to it, finding new infrastructure can provide preemptive defense.

There are several methods to leverage this approach; depending on your access and visibility, some are easier than others.  For instance, one method is to monitor domain name servers known to host malicious domains.  Another is to monitor all new domain registrations for a known pattern used by an adversary.

Another popular method is SSL certificate chaining.  PassiveTotal has written a post, “Harnessing SSL Certificates Using Infrastructure Chaining” detailing the method.  Mark Parsons (@markpars0ns) has a great presentation on “Hunting Threat Actors with TLS Certificates.”

Lastly, and one of the most difficult methods, is direct observation of malicious infrastructure.  This could be done through a service provider – or via infrastructure take-over (such as a sinkhole).  Through this method, significant intelligence can be gained, including: capabilities used through the infrastructure, victims contacting the infrastructure, and potentially other related infrastructure.

Don’t forget about the opportunities to use the Diamond Model to chain multiple approaches together.  For example, after discovering new infrastructure an analyst can pivot and ask for additional information about Diamond-connected features, such as capabilities.  This might be done by searching a malware zoo like VirusTotal for any reference to the infrastructure.

Advantages: Good tools exist to support the approach (PassiveTotal), finding infrastructure prior to operational use provides preemptive defense

Disadvantages: Limited data access, findings not relevant to many organizations

Tips: Data, Data, More Data

See more in the Diamond Model Section 7.1.3

Example Infrastructure-Centered Hunting Strategy

[Why] We hypothesize adversaries establish infrastructure prior to operations

[Why] We hypothesize adversary X continues to structure their domains using the pattern badstuff-<victimname>.com

[Why] We hypothesize adversary X continues to use the name server baddomains.com to host their infrastructure

[What] Our hunting goal: by monitoring the name server for new names matching the pattern, find new domains prior to their operational use, providing proactive defense.  Further, because the adversary uses the victim name in their domains, we will likely identify victims.

[Where] The baddomains.com name server

[How] Monitor the baddomains.com name server by querying the server every morning for all domains and identifying those not seen the previous day.  Further, look for any domains on the server matching the known pattern.

[When] We will leverage this strategy for 1 month to account for any dips in adversary activity during that period
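
The [How] step above amounts to a daily set difference plus a pattern match.  A sketch, assuming you can already enumerate the domains served by the name server (the `badstuff-<victimname>.com` pattern comes from the example hypotheses):

```python
import re

# Pattern from the example hypothesis: badstuff-<victimname>.com
PATTERN = re.compile(r"^badstuff-(?P<victim>[\w-]+)\.com$")

def new_suspect_domains(today, yesterday):
    """Return (new domain, inferred victim) pairs matching the adversary pattern."""
    hits = []
    for domain in sorted(set(today) - set(yesterday)):  # only domains new today
        m = PATTERN.match(domain)
        if m:
            hits.append((domain, m.group("victim")))
    return hits

yesterday = {"badstuff-acme.com", "unrelated.com"}
today = {"badstuff-acme.com", "unrelated.com", "badstuff-contoso.com", "other.com"}
print(new_suspect_domains(today, yesterday))
```

Because the hit arrives before the domain is used operationally, it can be blocked preemptively – and the captured victim name tells you who to warn.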

The Capability-Centered Approach

Aside from the victim-centered approach employed by most network defenders, the capability-centered approach is the second-most popular.  This is largely due to the broad accessibility of a massive malware zoo – VirusTotal.  If VirusTotal didn’t exist, this approach would likely be limited to only anti-virus vendors and others with potentially large collections of malicious binaries.

The capability-centered approach focuses on discovering intelligence from adversary tools – namely “malware” (though the category is larger than malware and includes legitimate tools used illegitimately).  The most advanced hunters using this approach take advantage of the VirusTotal retrohunt feature, which enables analysts to run YARA rules over the VirusTotal zoo looking for lesser-known samples.

Advantages: easy access to large malware library (VirusTotal), easily written analytics (YARA)

Disadvantages: without your own malware zoo – limited to VirusTotal features

Tips: take advantage of VirusTotal

See more in the Diamond Model Section 7.1.2

Example Capability-Centered Hunting Strategy

[Why] We hypothesize that network defenders share adversary capabilities via VirusTotal

[Why] We hypothesize that we can identify unique malware via a malware zoo using static analysis

[What] Our hunting goal: find undiscovered malware and its associated command and control (C2) channel to feed host- and network-based detection to enhance protection

[Where] VirusTotal

[How] Author and execute YARA rules over the VirusTotal data and monitor the zoo daily for new samples meeting our criteria

[When] We will author and improve rules for 2 weeks and run them perpetually
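
In spirit, a YARA rule is a named set of strings plus a condition over them.  A pure-Python stand-in of that idea (a real hunt would author actual YARA rules and run them via retrohunt; the rule, markers, and sample below are all illustrative):

```python
# Rough analogue of a YARA rule: named strings plus an "N of them" condition
RULE = {
    "name": "suspected_family_x",
    "strings": [b"cmd.exe /c ", b"badstuff-", b"\x4d\x5a"],  # MZ header + markers
    "condition": 2,  # match if at least 2 of the strings are present
}

def matches(rule, sample: bytes) -> bool:
    """Count how many of the rule's strings occur in the raw sample bytes."""
    hits = sum(1 for s in rule["strings"] if s in sample)
    return hits >= rule["condition"]

sample = b"\x4d\x5a\x90\x00...cmd.exe /c whoami..."
print(matches(RULE, sample))
```

The discipline is the same as in YARA: prefer several weak markers combined under a condition over one “perfect” string the adversary can trivially change.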

The Adversary-Centered Approach

The adversary-centered approach focuses on gaining visibility into the adversaries themselves – meaning few organizations have the requisite visibility; it is usually limited to service providers and those with extraordinary capabilities.  However, achieving visibility directly on the adversary generally provides tremendous, almost perfect, insight.  This includes infrastructure creation and management, capabilities (sometimes those still in development), attribution details, and at times victim information.

However, others may have access to some methods within this approach.  For instance, knowing an adversary persona may allow an analyst to leverage open source intelligence (OSINT) to track the persona across sites, potentially gaining insight into operations.  Further, an analyst may leverage adversary operations security (OPSEC) mistakes to achieve attribution based on their persona.  ThreatConnect’s CameraShy work illustrates the adversary-centered approach, achieving attribution through persona development and tracking.

However, while this approach leads to “newsworthy” items regarding attribution, its direct application to network defense is limited.  Therefore, generally only those with a vested interest in attribution leverage this approach.

Advantages: possible adversary attribution, deeper visibility into adversary operations

Disadvantages: the most difficult approach requiring significant (and sometimes extraordinary) visibility or adversary mistakes, does not generally result in actionable intelligence, adversary “false flag” mis-attribution may trip up undisciplined analysts

Tips: leverage OSINT and pray for a mistake 🙂

See more in the Diamond Model Section 7.1.4

Example Adversary-Centered Hunting Strategy

[Why] We hypothesize adversaries use personas to register malicious domain names

[Why] We hypothesize that some of these domain registration aliases relate to real people

[Why] We hypothesize that adversaries have mistakenly tied their operational alias to their real personas revealing their personal details

[What] Our hunting goal: uncover the real people behind malicious domains providing attribution

[Where] Domain registration data and other open sources

[How] Take 500 known malicious domains, extract aliases details from their registration, pivot across open sources looking for correlation with real people

[When] Spend 3 days collating known malicious domains, 2 weeks pivoting across open sources
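The [How] pivot above can be sketched as a simple correlation over registration records: surface any alias that appears across multiple malicious domains as a candidate persona for OSINT pivoting. The record schema below is invented for illustration – real WHOIS data is far messier (and, post-GDPR, often redacted).

```python
from collections import defaultdict

def pivot_aliases(records):
    """Group domains by registrant alias so reused aliases surface as candidate personas.

    records: iterable of dicts with illustrative keys
             'domain', 'registrant_name', 'registrant_email'.
    """
    by_alias = defaultdict(set)
    for rec in records:
        name = (rec.get("registrant_name") or "").strip().lower()
        email = (rec.get("registrant_email") or "").strip().lower()
        for key in filter(None, (name, email)):
            by_alias[key].add(rec["domain"])
    # An alias tied to several malicious domains is worth pivoting on.
    return {alias: domains for alias, domains in by_alias.items()
            if len(domains) > 1}
```

The surviving aliases then feed the open-source pivot in the [How] step.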

Why Threat Intelligence Sharing is Not Working: Towards An Incentive-Based Model

The juggernaut known as the “threat intelligence sharing imperative” rolls on.  Security and industry conferences fill their time with “sharing.”  How many sharing groups and platforms do we require?  Too many exist.  AlienVault recently reported that 76% of survey respondents feel a “moral obligation to share threat intelligence.”  McAfee says sharing threat intelligence “is the only way we win” (that isn’t even remotely true).  However, it’s not working.

According to Robert Lemos in eWeek, even with the most recent US cyber security legislation providing legal immunity, organizations are not rushing to share.  The reason is simple.  That was only one component of a complicated problem.  While the legislation addressed one policy element, it didn’t address the fact that sharing has never been proven (with data) to benefit sharing organizations.

We must move beyond these “religious” arguments and provide clear incentives for defenders to share.

In January, President Obama signed the Cybersecurity Act of 2015, but companies remain in a holding pattern, waiting for legal clarity and demonstrable benefits before sharing sensitive information.

– Robert Lemos, eWeek “Cyber-Threat Data Sharing Off to Slow Start Despite U.S. Legislation” [2016-10-02]

The Loudest in the Room

There is one thing I notice – security vendors yell the loudest about sharing.  I don’t claim their sharing narrative is FUD, but the narrative is a net positive for them: the more data and intelligence they receive, the stronger their products and services, adding value to their organizations.  Security vendors have strong incentives to promote threat intelligence sharing.  But, what is the case that the cost of sharing to defenders yields a net benefit to them?


Sharing is Costly

I’ve been involved in threat intelligence sharing for a long time.  I am the first to support the notion of sharing.  I have story upon story which supports the sharing narrative.  But, I qualify my support: the value of sharing must exceed the cost.

Most network defenders will agree: sharing is costly.

  1. It requires significant cost to integrate externally shared threat intelligence effectively.
  2. Once you consume that threat intelligence, you quickly discover that poor-quality data can consume your security team – it requires significant tuning.  There is risk.
  3. Establishing a sharing mechanism, program, and process is costly.  It usually requires engineering effort.
  4. Management support for sharing usually requires political capital from network defense leaders.  They must prove that the resources spent on sharing are more important than the 20 other components competing for resources.  Also, let’s not forget about the legal support.

An Incentive-Based Approach

Sharing must go beyond a “religious” argument.  Instead, we must take an incentive-based approach.  We must create and promote incentives for defenders to share – with demonstrable results.  Therefore, those promoting sharing must provide a coherent and consistent data-driven case that sharing overcomes these costs to defending organizations.  “Share because it is good for you” is not enough.

So, next time you advocate for sharing – enumerate why network defenders should share.  Make it meaningful.  Make it data-driven.

4 Qualities of Defensible Products – Secure Products Are Not Enough

For decades the industry worked to build secure products: products which can withstand attacks usually by reducing vulnerabilities and exposures.

However, what happens when that fails and an attack succeeds in spite of the work done to secure the product?  I propose that we require both secure products AND defensible products: products which not only resist attacks but can also be successfully defended when attacks bypass protection.

4 Qualities of Defensible Products

  1. Visibility – the visibility necessary to detect unauthorized use and malicious attacks
  2. Transparency – the transparency into the product’s operations to conduct a proper investigation and response after detection
  3. Controls – the controls necessary to remediate a threat after detection and investigation
  4. Resilience – the product returns to a working state quickly after remediation (or remains operational during an attack)

Indicators and Security Analytics: Their Place in Detection and Response

Indicators for research and response; analytics for detection

Indicators of Compromise (IOCs) are the lingua franca of threat intelligence.  Almost every intel sharing conversation begins and ends with indicators; commercial intelligence platforms revolve around them; most intelligence consumers end their interest there.  Does a better way exist?  Security analytics!

The Problem with Indicators in Detection

For all the focus given to indicators, we know they have the shortest lifespan of all intelligence exhaust (see the Pyramid of Pain by David J. Bianco).  In many cases we see single-use or victim-specific indicators, making sharing them useless.  In general, adversaries tend towards shortening the indicator lifespan – or removing indicators entirely; for instance, Locky recently transitioned to hardcoded RSA keys to remove the vulnerability of connecting to a command and control (C2) server.

Broad based indicator sharing is fraught with problems.  First, it assumes that the same indicators will be leveraged against multiple victims.  This is certainly the case for some threats.  But not all.  Second, quality will likely be a problem.  For instance, DHS Automated Indicator Sharing (AIS) states:

Indicators are not validated by DHS as the emphasis is on velocity and volume: our partners tell us they will vet the indicators they receive through AIS, so the Department’s goal is to share as many indicators as possible as quickly as possible. However, when the government has useful information about an indicator, we will assign a reputation score.   – DHS Automated Information Sharing

Further, AIS contributors can choose to remain anonymous.  Think about the problems of blindly consuming thousands of non-validated anonymously sourced indicators.  How exactly do you effectively validate an anonymously contributed indicator?  Previously, I wrote on the cost of poor intelligence.  Just one instance of 8.8.8.8 by an anonymous contributor could cause massive issues.
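A minimal sketch of the vetting problem: before ingesting anonymously shared indicators, filter out those that are obviously benign or unroutable. The allowlist below is a tiny illustrative stand-in – a real one would cover popular sites, well-known resolvers, CDN ranges, and your own infrastructure.

```python
import ipaddress

# Illustrative allowlist only; production lists are far larger.
BENIGN_IPS = {"8.8.8.8", "8.8.4.4", "1.1.1.1"}

def vet_indicator(indicator: str) -> bool:
    """Reject shared IP indicators that are obviously benign or not publicly routable."""
    try:
        ip = ipaddress.ip_address(indicator)
    except ValueError:
        return False  # not an IP; domains/hashes would take a separate vetting path
    if indicator in BENIGN_IPS:
        return False  # e.g. the anonymous 8.8.8.8 contribution described above
    if ip.is_private or ip.is_loopback or ip.is_multicast or ip.is_reserved:
        return False  # blocking these would break internal traffic, not the adversary
    return True
```

Even this crude gate stops the 8.8.8.8 scenario from reaching blocking infrastructure; real validation also needs source reputation, sighting counts, and expiry.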

Indicators of Compromise are only threat exhaust –  the necessary by-product of malicious activity.  Short-lived and increasingly single use, indicators pose a poor basis for detection – and it’s getting worse.  I’m not advocating for throwing indicators out entirely – they serve their purpose, but should not form the entire basis of threat intelligence detection.

Analytics For Detection

As the Pyramid of Pain suggests, we must move towards behavioral based detection focusing on whole classes of threats.  I’d much rather rely on an analytic detecting overwriting Windows registry keys for a “sticky keys” attack than hoping someone shares an IP address of a random hop point used before to remote desktop (RDP) into a host.  In the analytic case I catch every adversary using sticky keys, in the case of the indicator I catch only one adversary – with the hope they use the same infrastructure again.
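As a sketch of such an analytic, assuming a hypothetical telemetry schema (the event dict fields here are invented): one common “sticky keys” technique sets a Debugger value under Image File Execution Options for an accessibility binary, which can be flagged regardless of which infrastructure the adversary uses.

```python
# Behavioral analytic sketch: flag registry writes that set a Debugger value
# under Image File Execution Options for a Windows accessibility binary.
ACCESSIBILITY_BINARIES = {"sethc.exe", "utilman.exe", "osk.exe", "magnify.exe"}
IFEO_PREFIX = (r"HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion"
               r"\Image File Execution Options")

def is_sticky_keys_hijack(event: dict) -> bool:
    """event: hypothetical telemetry dict with 'type', 'key', 'value_name' fields."""
    if event.get("type") != "registry_set":
        return False
    key = event.get("key", "")
    if not key.startswith(IFEO_PREFIX):
        return False
    target = key.rsplit("\\", 1)[-1].lower()
    return (target in ACCESSIBILITY_BINARIES
            and event.get("value_name", "").lower() == "debugger")
```

Note this catches the technique itself – any adversary, any infrastructure – which is exactly the behavioral coverage the indicator approach lacks.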

Where do you find analytics?

  • The best place is your red team – ask them to describe their techniques and procedures.  Read their reports!  (I know – a stretch for some)
  • Read threat intelligence reports on adversary behaviors.
  • Ask your threat intelligence provider!  (Who you already abuse with information requests anyways – right?)
  • Check out MITRE’s Cyber Analytics Repository.

The Place for Indicators – Research and Response

Indicator sharing works within a small group of organizations that share a “victim space” (as the Diamond Model refers to victims with shared threats).  This greatly increases the value of shared indicators because the likelihood of attackers reusing indicators increases.  However, indicator sharing outside the shared victim space reduces their value and increases their cost.  Research and response receive the greatest value from shared indicators because they provide a method of communicating observables discovered in attacks, allowing analysts to pivot deeper into malicious activity seen by others.

Your Own Intelligence is the Best

In the end, to achieve greater detection capability organizations must invest in security analytics and reduce their reliance (and insistence) on indicators from externals.  The best indicators in the world are those from your organization’s own telemetry – your own threat intelligence is the most relevant.  Otherwise, look suspiciously at indicators from others and instead ask to share analytics!

Note: “security analytics” is a dirty term – overused and often misused.  To be clear, I define analytics in this post as indicator-independent behavioral detection derived from the knowledge of bad stuff (i.e., threat intelligence).

The Laws of Cyber Threat: Diamond Model Axioms

Many confuse the purpose of the Diamond Model.  Most believe the Diamond Model exists for analysts, but that is an ancillary benefit.  Instead, think of the Diamond Model like a model airplane used to study the principles of aerodynamics.  It is not an exact copy but rather a good approximation of the full-scale airplane being studied.  The model exposes elements to test and study in a controlled environment improving the performance of the plane in an operational environment.  The Diamond Model does the same, except for cyber threat analysis.

When describing the Diamond Model to others, I usually start with, “we didn’t create the Diamond Model, we simply expressed some fundamental elements which always existed.”  Surprisingly, I learned while writing the Diamond Model how exposing this fundamental nature improved cyber threat intelligence.

The Diamond Model captures this fundamental nature about threats in seven axioms and one corollary.  This post will highlight those axioms.

Axiom 1

For every intrusion event there exists an adversary taking a step towards an intended goal by using a capability over infrastructure against a victim to produce a result.

What it means: every malicious event contains four necessary elements: an adversary, a victim, a capability, and infrastructure.  Using this fundamental nature we can create analytic and detective strategies for finding, following, and mitigating malicious activity.
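Axiom 1 translates naturally into a data structure. A minimal sketch – the field names are my own illustrative choices, not mandated by the model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DiamondEvent:
    """One intrusion event with the four core features of Axiom 1."""
    adversary: str       # actor or persona (often initially unknown)
    capability: str      # tool or technique, e.g. a malware family
    infrastructure: str  # e.g. a C2 domain or IP address
    victim: str          # targeted asset or organization
    phase: str = ""      # kill chain phase, used to build activity threads
    result: str = ""     # outcome of this step

# Hypothetical example event
evt = DiamondEvent(adversary="unknown-actor-1",
                   capability="Loader.X",
                   infrastructure="evil.example.com",
                   victim="hr-workstation-12",
                   phase="delivery",
                   result="success")
```

Structuring events this way makes the analytic pivots the model describes (victim-to-infrastructure, infrastructure-to-capability, and so on) straightforward queries.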


Axiom 2

There exists a set of adversaries (insiders, outsiders, individuals, groups, and organizations) which seek to compromise computer systems or networks to further their intent and satisfy their needs.

What it means: there are bad actors working to compromise computers and networks – and they do it for a reason.  Understanding the intent of an adversary helps develop analytic and detective strategies which enable more effective mitigation.  For example, if we know that an adversary is driven by financial data, maybe we should focus our efforts on the assets that hold and control financial data instead of other places.


Axiom 3

Every system, and by extension every victim asset, has vulnerabilities and exposures.

What it means: vulnerabilities and exposures exist in every computer and every network.  We must assume assets can (and will) be breached – others express this notion as “assume breach.”


Axiom 4

Every malicious activity contains two or more phases which must be successfully executed in succession to achieve the desired result.

What it means: malicious activity takes place in multiple steps (at least two), and each step must be successful for the next to be successful.  One popular implementation of this axiom is the Kill Chain.  But, the Kill Chain was not the first to express this notion – another popular phase-based expression is from the classic, Hacking Exposed.


Axiom 5

Every intrusion event requires one or more external resources to be satisfied prior to success.

What it means: adversaries don’t exist in a vacuum; they require facilities, network connectivity, access to victims, software, hardware, etc.  These resources can also become their vulnerabilities when exploring mitigation options.


Axiom 6

A relationship always exists between the Adversary and their Victim(s) even if distant, fleeting, or indirect.

What it means: exploitation and compromise take time and effort – adversaries don’t expend them without reason.  An adversary targeted and compromised a victim for a purpose – maybe the victim was vulnerable to a botnet port scan and the adversary seeks to compromise resources to enlarge the botnet; maybe the victim owns very specific intellectual property of interest to the adversary’s business requirements.  There is always a reason and a purpose.


Axiom 7

There exists a sub-set of the set of adversaries which have the motivation, resources, and capabilities to sustain malicious effects for a significant length of time against one or more victims while resisting mitigation efforts. Adversary-Victim relationships in this sub-set are called persistent adversary relationships.

What it means: what we call “persistence” (such as in Advanced Persistent Threat) is really an expression of the victim-adversary relationship.  Some adversaries need long-term access and sustained operations against a set of victims to achieve their intent.  Importantly, just because an adversary is persistent against one victim doesn’t mean they will be against all victims!  There is no universal “persistent” adversary.  It depends entirely on each relationship at that time.

Corollary

There exists varying degrees of adversary persistence predicated on the fundamentals of the Adversary-Victim relationship.

What it means: not all persistence is created equal.  Some adversary-victim relationships are more persistent than others.  Sometimes a victim will mitigate a years-long intrusion only to be compromised again by the same adversary that week; at other times the adversary never returns.

Diamond Model or Kill Chain?

Rob MacGregor at PwC, in “Diamonds or chains”, asked: do you choose the Diamond Model or the Kill Chain?  I get asked this question often.  The question assumes the models are mutually exclusive when, in fact, they are not only complementary but interconnected.  Both models express fundamental elements of network exploitation in methods usable by network defenders.  You can’t expect complete intelligence or network defense without using both the Diamond Model and the Kill Chain.

Most understand that the Diamond Model expresses the first axiom encompassing the basic components of any malicious event: “For every intrusion event there exists an adversary taking a step towards an intended goal by using a capability over infrastructure against a victim to produce a result.”  However, most readers stop there, at page 15 – only 25% of the model.

Adversaries don’t just conduct one activity and move on – no, they must conduct several in a phased approach each successfully completing before the next.  As expressed on page 15 via Axiom 4: “Every malicious activity contains two or more phases which must be successfully executed in succession to achieve the desired result.” Axiom 4 effectively describes the Intrusion Kill Chain (section 3.2 of the Kill Chain).  Therefore, Events interconnect via Activity Threads which describe campaigns.

One may notice a great similarity between the figure describing key campaign indicators (Kill Chain pg. 8) and the Activity Threads illustration (Diamond Model pg. 31).  The two approaches interconnect at this point!

Diamond Model Activity Threads; The Diamond Model of Intrusion Analysis pg. 31

Dependent Events (each composed of an adversary, victim, capability, and infrastructure) create Activity Threads across the Kill Chain.  These threads compose (using key campaign indicator analysis) adversary campaigns.  Ta Da!  The first interconnection between the two models.
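The thread-building step above can be sketched as grouping events by adversary-victim pair and ordering them along kill chain phases. The phase names and event tuples below are illustrative.

```python
# Ordered kill chain phases (abbreviated names, for illustration).
KILL_CHAIN = ["reconnaissance", "weaponization", "delivery", "exploitation",
              "installation", "c2", "actions_on_objectives"]

def build_threads(events):
    """Compose events into activity threads.

    events: iterable of (adversary, victim, phase) tuples.
    Returns {(adversary, victim): [phase, ...]} with phases in kill chain order,
    i.e. one activity thread per adversary-victim relationship.
    """
    threads = {}
    for adversary, victim, phase in events:
        threads.setdefault((adversary, victim), []).append(phase)
    for key in threads:
        threads[key].sort(key=KILL_CHAIN.index)
    return threads
```

Comparing threads across victims (the horizontal view in the Activity Threads figure) is then what surfaces the key campaign indicators the Kill Chain paper describes.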

The Diamond Model and Kill Chain analysis are highly complementary. Kill Chain analysis allows an analyst “to target and engage an adversary to create desired effects.” (Kill Chain pg. 4) The Diamond allows analysts to develop tradecraft and understanding to build and organize the knowledge necessary to execute the Kill Chain analysis.

  • Once an analyst develops an activity thread, courses of action for each event along the thread can be identified using the Kill Chain’s course of action matrix. As illustrated in the figures, courses of action for each of the Kill Chain stages are identified for activity threads. The power of the Diamond Model is that courses of action can be designed to span multiple victims and across the activity of an adversary making the actions even more powerful as they further reduce the capacity of the adversary.
  • Activity groups clustered by the same likely adversary (i.e., clustering by attribution), with analysis of the largest common feature set amongst the events in a group, can provide the Kill Chain’s required key campaign indicators necessary to focus and prioritize courses of action.

In the end, don’t ask whether to use the Diamond Model or the Kill Chain.  Instead ask: are we using them both effectively?

Threat Intelligence Definition: What is Old is New Again

Michael Cloppert, whom I hold in great esteem and friendship, argues for a new and unconventional definition of “cyber threat intelligence.”  His post is excellent.  His argument is simple: the existing definitions of intelligence and cyber threat intelligence are lacking, based on his professional experience of the domain, and fail to capture its unique elements.  He offers several definitions:

Cyber threat operations as actions taken in cyberspace to compromise and defend protected information and capabilities available in that domain

Cyber Threat Intelligence Analysis as the analysis of those actions and the actors, tools, and techniques behind them so as to support Operations

I define the Cyber Threat Intelligence domain as the union of Cyber Threat Intelligence Operations and Analysis.

Michael Cloppert, Defining Cyber Threat Intelligence (2016)

I agree with his assessment that existing cyber threat intelligence definitions lack accuracy.  But Mike’s definitions are too constrained by operations and miss the key element of intelligence in any discipline: intelligence serves to inform decision-making (whether that decision-making is technical/tactical, such as in firewalls, or strategic at the executive level).  Intelligence doesn’t serve operations; intelligence serves decision-making, which in turn drives operations to achieve policy outcomes.

Mike references some key CIA thought-pieces on their definitions of intelligence, namely by Martin T. Bimfort in A Definition of Intelligence. Mike is correct that taken at face value, Bimfort’s definition is too constrained with concern about national security to be of much value to cyber threat intelligence.

Intelligence is the collecting and processing of that information about foreign countries and their agents which is needed by a government for its foreign policy and for national security, the conduct of non-attributable activities abroad to facilitate the implementation of foreign policy, and the protection of both process and product, as well as persons and organizations concerned with these, against unauthorized disclosure.

Martin T. Bimfort’s definition of intelligence in A Definition of Intelligence

However, instead of taking Bimfort’s definition at face value, let’s instead look at its essence by removing the domain-specific (state-only) language.  By doing so, I arrive at the following revised definition:

Intelligence is the collecting and processing of that information about threats and their agents which is needed by an organization for its policy and for security, the conduct of non-attributable activities outside the organization’s boundaries to facilitate the implementation of policy, and the protection of both process and product, as well as persons and organizations concerned with these, against unauthorized disclosure.

This definition fits well what we do in cyber threat intelligence: we uncover the hidden threats to an organization (be it a company or country) to protect them against threats both attributable and non-attributable to enable their policy (which for a private company is to return value to shareholders), protect their operations, and prevent disclosure of secrets.

I propose that cyber threat intelligence is nothing more than the application of intelligence principles and tradecraft to information security.  Its outcome is nothing different from traditional intelligence: to inform and empower decision-making at all levels with knowledge of threats.  We don’t require a radical new definition of cyber threat intelligence, because the traditional definitions of intelligence are applicable by simply broadening them outside of their state-only constraint.

EDIT: Robert M. Lee blogged in response – “Intelligence Defined and its Impact on Cyber Threat Intelligence”.  He concluded that intelligence is “the process and product resulting from the interpretation of raw data into information that meets a requirement as it relates to the adversaries that have the intent, opportunity and capability to do harm.”
