Active Response

Always A Bad Day For Adversaries

CCleaner Malware 2nd Stage Victimology Highlights Adversary Interest in Telecommunications

Cisco Talos CCleaner Victim Domain List (source)

Recently, the CCleaner tool contained malware likely infecting millions.  Cisco’s Talos threat intelligence group analyzed data from the CCleaner malware command and control (C2) server to dig deeper into the second stage.  Talos released a list of domains whose victims would receive a second-stage package, likely for further operations.

While most names were familiar to me, the subdomains intrigued me.  The adversary was not just interested in the parent domains but in specific sub-domains – for example, the JP and AM sub-domains within Sony.  Understanding these sub-domains and their role may offer more insight into the interests of the adversary.

This list likely changed over time – it is only a snapshot, so it’s difficult to build a complete profile of adversary interest, but even the snapshot is interesting.

So, I went digging for about 30 minutes and here’s what I found.

Interesting Findings

  • These are not well-known subdomains.  This means the adversary identified them somehow and then assigned them greater value – marking them to receive the second stage.  This strongly indicates targeted activity rather than a commodity threat.
  • This isn’t a clear-cut case of economic espionage because of the tendency towards telecommunications, which can serve intelligence value well beyond intellectual property theft.
  • While I didn’t find a correlation among them all, many of these domains were listed in leaked document/email dumps like WikiLeaks and the Panama Papers.
  • There is an especially strong correlation across the list: nearly all domains are associated with electronics and technology.
  • A large number (16 of 24 – 66%) of targets involved telecommunications and telephony: Samsung (mobile handsets), Singtel (telecommunications), Sony (most don’t know about Sony Mobile), Intel (chips for mobile), Microsoft (maker of Windows Phone), Cisco (lots of telephony), O2, Vodafone, Linksys, and D-Link.
  • The odd domain out is involved in gaming and gambling equipment, which doesn’t correlate with any other victims.  This strengthens the hypothesis that the victim list is a changing requirement set and the adversary may be satisfying a large variety of needs.
  • The three prominent geographic elements amongst the domains are Asia, Europe, and North America.
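The 66% figure above is simple arithmetic. As a toy illustration of the snapshot proportions (the two category counts come straight from the list above; nothing else is implied):

```python
from fractions import Fraction

# Sector counts from the snapshot described above: 16 of 24 victim domains
# involved telecommunications/telephony; the remainder are grouped as "other".
sector_counts = {"telecom/telephony": 16, "other": 8}

total = sum(sector_counts.values())
telecom_share = Fraction(sector_counts["telecom/telephony"], total)

print(total)          # 24
print(telecom_share)  # 2/3 (the post rounds this down to 66%)
```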

Victim Domains

singtel.corp.root – internal domain related to Singapore Telecommunications Limited (Singtel), a Singaporean telecommunications company.

htcgroup.corp – This domain is actually ambiguous because there are many “HTC Group” organizations, and the well-known electronics manufacturer isn’t publicly referenced as “HTC Group.”

samsung – Clearly a reference to the well-known electronics manufacturer.

samsung-breda – Samsung Electronics Europe Logistics located in Breda, The Netherlands (Yelp Entry)

samsung.sepm – Likely Samsung Electronics Poland Manufacturing (SEPM) (Wiki Page)

– Samsung Slovakia (Web Site)

– Sony Japan

– Sony Americas

– I didn’t know anything about the Gauselmann Group.  Now I do!  The “gg” sub-domain likely refers to a subsidiary, Gebrüder Gauselmann (source), which has its own domain.  The subsidiary focuses on the development of gaming/gambling electronics and equipment.

“The Gauselmann Group is a family-run, internationally active company for the entertainment and leisure industry. In addition to the development, production and distribution of entertainment gambling and money management systems, the Group operates the well-known casino chain CASINO MERKUR-SPIELOTHEK. In addition, the Gauselmann Group is also active in many other areas, such as sports betting, online gaming and gambling.” (Source)

– The well-known virtualization software developer

– Intel Corporation Germany

– Intel Corporation US and Canadian Region (source)

– The Windows development network; an older domain dating back to NT kernel development (source1, source2)

– Well-known network-centric equipment manufacturer

– An internal domain for the European telecommunications company O2.  This is likely the subdomain for UK operations (source1, source2)

– Vodafone is a global telecommunications company.  The VF likely refers to Vodafone and the ES likely refers to Spain – there is some data strengthening that assertion (ref1, ref2)

linksys – well-known network equipment manufacturer

– Well-known technology company focused on printers, projectors, wearables, robots, etc.

– Electronics manufacturer

– Taiwanese organization

– DVRDNS is another name for DynDNS, the dynamic networking service.  No information on infoview2u.

– An internal domain related to Akamai, the well-known internet technologies company.  Several Autonomous System Numbers (ASNs) associated with Akamai (e.g., AS18680) are registered as Akamai DFW Technologies Inc. (source)

– Quick analysis didn’t reveal any intelligence on this domain, but it is clearly related to the largest consumer email service run by Google, Gmail.

– Well-known network equipment manufacturer

– Domain related to the online certification and assessment company Gauge.

Comments on US-CERT Grizzly Steppe Enhanced Analysis Report

On 10 February 2017, US-CERT released “Enhanced Analysis of GRIZZLY STEPPE Activity” (AR-17-20045).

Overall Quality

This report differs from the first (GRIZZLY STEPPE – Russian Malicious Cyber Activity) and is not so much an update as a companion.  It does not suffer from the same problems as the first release, which tried to satisfy a poorly executed joint statement and a range of audiences (e.g., mixing geopolitical topics with technical details).  Instead, this report focuses on network defenders, using a layout and language common to that community.  The lack of any press release around this report is likely because of its lack of applicability to a broader audience – and no earlier joint statement.

The quality of this report is much higher, potentially illustrating more care and lessons learned from the first release.  It also does not suffer from the same overbearing editing and joint agency review process which was clear in the earlier releases.

Network Defense Value

Technically, the most valuable addition is a well-documented kill chain narration allowing defenders to better understand the behaviors of these adversaries and develop detection and mitigation tailored to their own environments.  As always, deploying others’ detection rules carries risk, and each organization will have to assess those included.

The mitigation recommendations are general but match up well with the described adversary behavior – and an organization including these adversaries in their threat model should take them seriously.  Other organizations should probably continue following Australia’s “Strategies to Mitigate Cybersecurity Incidents” especially the “top 4” and “essential eight.”

Leaving Microsoft After 3 Years

After over three years I’ve left Microsoft to pursue two amazing opportunities.

Some Words About Microsoft

Sergio on Microsoft campus in Summer 2016

When I started at Microsoft my mother said to me, “Microsoft?!  I thought you hated them.”  She was right – pre-2003 Microsoft didn’t have their security act together, and that frustration formed my poor opinion of the company.  However, not only has that changed, but they are now one of the most advanced and important companies in the security space.  Many people still talk down about Microsoft security, but I can tell you – that crew contains some of the smartest and hardest working security professionals I know, and they become just as frustrated as anyone when things don’t go perfectly.

When considering joining in the first place a good friend, John Lambert, sold me easily on Microsoft – “Microsoft controls the physics.”  Controlling the physics means that for a large part of the world’s computers, adversaries only operate within the parameters of Microsoft products and services. Microsoft can and does make it harder for adversaries to operate at a global level.  Very few other companies can. This is powerful.  As a security professional within Microsoft, you can influence the security of billions of customers.

I’ve come away from Microsoft having learned how to ship product, met hundreds of C-suite executives and learned their perspectives and challenges, learned approaches for security analytics over REALLY BIG DATA, and made the internet just a little safer for billions.  I cannot recommend the company enough for security professionals.  They have big-company challenges, but their family-friendly and mission-focused culture is unique.  You can do big things and also enjoy a life outside of work.

Why I left – Where I’m Going

I’m driven by simple motivation: do as much good as possible.  Obviously, at Microsoft, I affected the security of billions.  But, other problems abound.  Currently, I see two major threats to humanity requiring my attention: threats to critical infrastructure and threats to human life from human trafficking. I’m now working part-time on both problems.

Global Emancipation Network

In March 2016 I began serving as Technical Director for the non-profit non-governmental organization (NGO) Global Emancipation Network.  Human trafficking is a massive human rights issue.  At least 20 million and as many as 50 million people are enslaved globally, but only about 77,000 are rescued per year.  This is a devastating gap that must close.  But there is an opportunity.  Like any other business, human traffickers use the internet to increase their effectiveness and efficiency – and that is a vulnerability we can leverage against them.

The NGO collects global data on human trafficking on the internet and leverages analytics and big-data approaches to enable intelligence and operations that stop traffickers and rescue victims.  Interestingly, combating traffickers on the internet and hunting hackers are very similar, and we’re using many of the same techniques in both domains.  I’m excited to have the opportunity to spend more time on this problem and help save millions of lives.


Sergio next to Dragos ICS equipment

My second, equally amazing, opportunity began on 1 January 2017.  I joined Dragos, Inc. as Director of Threat Intelligence and Analytics.  Dragos develops solutions to secure industrial control systems – a heavily underserved but massively important domain.  Industrial control systems underpin the networks and systems running the most critical functions, such as power, water, and sewer, not to mention many hundreds of other important domains such as pharmaceutical manufacturing.  These networks and systems enable civil society and are usually classed as life/safety-critical systems.  When they fail, people die, services cease, chaos abounds.

I’m proud to join Dragos working to safeguard civilization.  I’m hunting threats targeting and affecting critical infrastructure while delivering the intelligence necessary to enable good decision making.  I’ll work to cut through the FUD surrounding critical infrastructure threats and empower the ICS operator and security community with fact-based knowledge and perspective.

Enabling Skype Two Factor Authentication

Skype is an important communications medium.  This importance also lends itself to targeting by adversaries.  Two factor authentication is one of the most important defensive actions you can take to prevent successful attacks.  Therefore, you should enable two-factor authentication for Skype!

However, enabling it is not straightforward.  Here’s how you do it.  THE THIRD STEP IS THE MOST IMPORTANT – otherwise an adversary can still bypass two-factor authentication by logging in with the Skype name.

  1. Link Skype and Microsoft Account
  2. Enable two-factor authentication for the associated Microsoft account
  3. Disable login via Skype username (via this sign-in preference panel)

State-Associated Hackers Target Me – Here’s What To Do When They Come After You

You were just told state-associated hackers attacked your account.  Congratulations!  You’ve joined a select club and your work has received recognition – though probably not of the variety you were hoping for.  According to Google, less than 0.1% of accounts unlock this achievement.  But what do you do now?!

I’ve attracted the same attention over my career studying and countering targeted threats, along with my work at the Global Emancipation Network disrupting human trafficking on the internet – simultaneously pissing off international organized criminal groups.  I’ve also notified and counseled countless victims of state-associated activity.

I’ll tell you what I’ve done in this situation.  Hopefully you can apply some of the lessons to your own.  However, this isn’t going to work for everyone – you will need to take your own mission and specific situation into account.  Further, I will try to dispel some of the myths and FUD which surround detection of state-associated attacks.

Don’t Panic

I can’t overstate this – DON’T PANIC.  You must get past your initial feelings of unease, disgust, anger, and possibly fear.  Panic makes for bad decision-making, and that’s exactly what NOT to do in this situation.  Here is what you need to understand to help you make good decisions…

Myth 1: There is nothing you can do and you’re on a kill list

Reality: While there have been cases of activists targeted through their digital accounts which likely led to their deaths, this is extremely rare.  And you will already know if this is a likelihood in your specific case.  Usually, the adversary is just interested in the following three things:

  • Intelligence about you and your business dealings
  • Access to your contacts (to know them and further use your account to compromise them)
  • Access to your organization(s) (to use your access to compromise enterprises)

You are not without recourse and effective defense.  Those who claim that state-associated adversaries will always win are wrong.  Yes, they pose a difficult problem, but any hyperbole beyond that is incorrect.

Myth 2: If they wanted access to your account(s) and computers, they’d already have them.

Reality: Huh?  That makes no logical sense.  If they already had access, how did they get it in the first place?  This is just defeatism and has no grounding in reality.  The truth is that adversaries attack not just once but MANY times.  If you’re a target, you will likely remain a target.  Further, you are likely a target for more than one adversary.  Also, say you were previously compromised: the adversary may have lost their access (due to action or inaction on their part or yours) and needs to regain it.  So, DON’T YOU DARE GIVE UP NOW.  You have been given a new opportunity to defend yourself.  TAKE IT.

Myth 3: You were specifically targeted

Reality: Yes – you made it onto a list.  But you are probably one of THOUSANDS targeted in a single campaign, and one of TENS of THOUSANDS targeted by that adversary this year.  It’s very likely that if the adversary doesn’t succeed you will remain on their target list for a long time and receive many attacks over time.  You will now need to remain constantly vigilant.  But the fact remains that you were probably not singled out.

Myth 4: You should have already taken action

Reality: Yes, taking defensive action before an attack is worth much more than action taken afterwards.  However, that assumes a world that doesn’t exist.  None of us, not even the greatest security researcher in the world, takes perfect precautions.  Even if an attack was successful, that doesn’t mean we give up and cede the battlefield – we fight back!  We retake what is ours.

Myth 5: State hackers can’t be detected and never attack where they will be detected

Reality: Hackers are driven by the motivation to succeed in their mission.  If they don’t succeed they don’t get paid, don’t get promoted, or maybe face a firing squad.  They will ultimately attack a victim via whatever method works.  Effectiveness will almost always outweigh potential detection.  I’ll caveat this: each adversary and operation carries a different risk model, so this won’t hold for EVERY operation, but it holds for most.  So yes, they will attack you via LinkedIn messages, Gmail, your organization’s email account, Facebook Messenger, etc.  They will sometimes be caught.  How do I know?  Because they’re caught all the time.

Assess the Situation

You alone will be able to assess the situation.  What is your business or role?  Will your life be in danger?  What information is possibly compromised?  Do you know if the adversary was successful or was it just an attempt?

Use all the intelligence you have about yourself and the adversary to understand the right actions to take.

Inform Your Community

Tell others!  Tell your community!  The adversary likely targeted not just you but many others within your community, enterprise/organization, or area of interest.  By informing others you empower them with situational awareness, so they can learn from you and hopefully strengthen their own defenses.

Note: Those who either victim-blame or otherwise talk down about those publicizing their attacks are working against community defense and helping the adversary by stigmatizing attacks.  Our community should actively call this out and STOP IT.

Myth 6: Telling others informs the adversary you know

Reality: First, understand that the adversary is not scared of you, and their hubris likely keeps them from recognizing you as a capable actor.  If they find out you know, they will discuss it while laughing over beer – but probably nothing else will happen.  You will need to make a personal intelligence gain/loss decision here based on your own assessment.

However, DON’T USE A COMPROMISED COMMUNICATIONS CHANNEL TO DISCUSS THE ADVERSARY!  Use another channel.  This will likely reduce any risk.

Get Help & Defend Yourself

  1. If you don’t know what to do, or even if you think you do, get some help and advice.  Some places to turn are ProPublica and CitizenLab.
  2. Turn on two-factor authentication – ON EVERY ACCOUNT POSSIBLE.  Hardware-based two-factor (e.g., YubiKey) is the best choice, but any is better than nothing.
  3. Reset passwords.  If possible, from a computer you don’t normally use.
  4. Set an alternative notification method.  If possible, set up an alert when an account receives a successful or unsuccessful login attempt.
  5. Check email accounts for any forwarding rules.  Some adversaries create forwarding rules once they gain access to the account to persist access.
  6. Check account logs.  Some providers allow you to examine account access logs.  Don’t just look at the location but also the time and method of access.  This can give you an indication if compromise was successful.
  7. Rebuild computers you use on a regular basis.  This includes phones.
  8. Increase use of encrypted communications (PGP, Signal, etc.)
  9. Start encrypting your data at rest (when stored on a drive) to prevent value being extracted after exfiltration

IMPORTANT: If you suspect successful compromise, don’t use the compromised machine.  Get help to investigate and remediate.  Unplug or turn off the machine.  Otherwise, just rebuild from a clean source.

Remain Vigilant

Once targeted, the adversary will likely target you again in the future.  Periodically conduct the defensive steps above and remember those below:

  • Be careful of those who request access to your social network, such as LinkedIn connect requests
  • It’s okay to suspect items from colleagues or illustrious organizations and ask for verification via another channel
  • Keep your applications and OS up to date with patches
  • Browse sites wisely
  • Rebuild regularly both phones and computers
  • Don’t enable macros

2016 Targeted Threats in Review – What We’ve Learned

In 2016 we saw some amazing targeted threat events – election disruption, major financial fraud, more zero-days, new groups uncovered, etc.  Information security tends towards “firefighting” mode, always concerned with the latest or newest.  However, too easily we fall into “recency bias,” falsely giving the most recent events the greatest weight in our mental model.  So, to break that, I take time each year to look back over the earlier threats to correct my perspective.  The most interesting items I find are those which challenge and break assumptions, causing me to reevaluate my models.  They’re the events which surprise and impress seasoned analysts.

This year I saw items fall into three categories:

  • Victimology & Intent – While targeted threats continue to conduct traditional espionage against both government and industry, their victimology and intent are shifting slightly to better align with changing motivations.
  • Capabilities – Adversaries continue to value effectiveness over all else to support their operations (this year, macros and increasing evasion techniques).
  • “Duller Objects” – I call them the “duller objects” because they’re not shiny and don’t usually get much attention.  Yet they can pose as much a threat as any shiny object and act as bellwethers.

Victimology & Intent

US Industry Victims

In September 2015, the US and China agreed that neither would “conduct or knowingly support theft of intellectual property.”  FireEye reported in “Redline Drawn” that there has been a significant drawdown of US-targeted operations since at least 2014.  But while there has been a decline, they could not attribute it solely to the agreement.  Similarly, Symantec reported that in 2016 a notable threat group previously operating heavily against US targets, Buckeye, ceased US operations and shifted strongly towards Hong Kong.

What we learned: While no public smoking gun exists that the US-China agreement had a direct effect on US victims, there is a clear shift in operations according to FireEye and Symantec.  This illustrates that even with the long-standing “China hacking the US” narrative driving a majority of targeted threat protection investment – things can shift.

Swift Attacks

In February 2016 an adversary leveraged access into the Bangladesh Central Bank to defraud other banks of $81M – after attempting fraudulent transactions of over $1B.  Symantec correlated the attack to the Lazarus Group, based in the DPRK, via malware analysis and similarity to the malware used to attack Sony Pictures.  SWIFT is the backbone of the global financial network, supporting over 6 billion messages per year between over 200 countries and territories.  However, thanks to a small typo by the adversary in the name of the non-profit supposedly benefiting from the transactions, an analyst caught the error and found the fraud, allowing SWIFT and the banks to unravel the operation.  Later, analysts discovered further fraudulent activity, possibly implying that either more than one group is active in the network or the Lazarus Group had deeper or wider access than initially thought.

What we learned: If we trust Symantec’s analysis and attribution to the Lazarus Group, this presents a significant event: state-enabled cyber actors conducting global financial fraud.  This could cause other financially struggling states to look to their cyber capabilities to enable financial fraud, and it increases the scope of victimology for groups which fit this profile.



STRONTIUM

Probably the biggest security story this year – and by far the biggest targeted threat ever in the public dialog: hacks into the Democratic National Committee (DNC), Democratic Congressional Campaign Committee (DCCC), World Anti-Doping Agency (WADA), and many others during a tumultuous year involving the US presidential election and the 2016 Olympics in Rio de Janeiro.

Traditionally, targeted threat activities like STRONTIUM exploit victims to gather intelligence and use it quietly, avoiding all attention.  They rarely use the intelligence publicly.  In this case, they posted (at least some of) the intelligence publicly in large dumps.  However, this is not the first example of such activity.  Previous examples include the dump of Sony Pictures emails and the Syrian Electronic Army (SEA) dumping Forbes’ databases.

This, alongside other activity including intrusions into state election offices made headlines and potentially affected US elections.

What we learned: STRONTIUM reinforced that the value of intelligence is not what you collect, but how you use it.  Like the Sony email dump, this likely points to a trend over time of dumping intelligence rather than hoarding it, especially in cases where the greatest harm ultimately comes from disruption.  This will further reinforce the relationship between cyber activities and traditional information operations.  Read more about STRONTIUM.


Macros – Macros Everywhere


Relegated by many to the past, macros came back this year with a vengeance.  As reported by both Microsoft and Trend Micro, macro threats rose significantly during 2016.

This was not limited to commodity threats (such as ransomware); targeted threats adopted the technique as well.

Luckily, Office 2016 included new techniques for administrators and defenders to block macro attacks.

What we learned: I always teach my students, “what was old will be new again” in the threat landscape.  It is critical that professionals study the threats of old because they, or some variant, will likely return.  However, our hubris often dismisses these threats because they’re not as “cool” as the newer techniques.  But as we know, adversaries care less about newer techniques than they care about effectiveness.


Sandbox Evasion

Anti-forensics and sandbox evasion are not new.  However, as the security industry has matured and shortened the life-cycle of many threats, malware authors are increasingly using sandbox evasion throughout their entire kill chain.  Because network defense has had to rely increasingly on automated techniques (such as dynamic malware detonation) to compensate for the failure of static signatures, malicious capabilities naturally respond to the defensive environment by disrupting those automated techniques.  Lastly, security researchers are increasingly effective at discovering and uncovering operations and pose an equal harm to malicious activity.

For targeted threats, this includes not only the long-term Trojan or backdoor, but the entire operations chain through the first stage.  It is now common to encounter JavaScript reconnaissance profiling the victim, with some back-end logic determining whether they’re “worthy” of receiving the dropper or later stages.  This victim profiling includes the locale of the host, domain affiliation, hostname, etc.  Further techniques include just-in-time decryption and modularization to prevent the full capability from being captured.
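A minimal sketch of what such a back-end gating check might look like – the field names, domain list, and hostname check are all hypothetical, purely to illustrate the “worthiness” decision described above:

```python
# Hypothetical back-end gate: the first-stage reconnaissance reports host
# attributes, and only hosts matching the adversary's criteria receive the
# dropper or later stages.  All names here are illustrative.
TARGET_DOMAINS = {"corp.example-target.com", "internal.example-target.com"}

def serve_next_stage(profile: dict) -> bool:
    """Decide whether a profiled host is 'worthy' of the next stage."""
    domain = profile.get("domain", "").lower()
    hostname = profile.get("hostname", "").lower()
    # Gate on domain affiliation and crudely avoid obvious analysis sandboxes;
    # a real gate might also check locale, installed tooling, uptime, etc.
    return domain in TARGET_DOMAINS and not hostname.startswith("sandbox")

print(serve_next_stage({"domain": "corp.example-target.com",
                        "hostname": "ws-0042"}))   # True
print(serve_next_stage({"domain": "random-home-user.net",
                        "hostname": "desktop"}))   # False
```

This is also why defenders should treat beaconing without a delivered payload as meaningful: the absence of a second stage may simply mean the host failed the adversary’s gate.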

What we learned: This reinforces our understanding that adversaries will always react to the security environment.  Elements which harm their operations, such as security researchers or automated analytics, will always be countered over time.

The “Duller Objects”



PLATINUM

PLATINUM is a fascinating activity.  Their operations are like a textbook for targeted threats: zero-day usage, long-term persistence, novel DLL injection, time-limited operations to prevent discovery, anti-forensics techniques, political targeting, etc.

What we learned: Not only did we learn some fascinating new techniques about targeted threats to bolster our defenses, PLATINUM also taught us that a threat operating since at least 2009 can still be newly discovered in 2016.  This means that there are likely many more targeted threats operating than cataloged by threat intelligence analysts.

Targeted Threat As A Service

What if you’re an actor with little-to-no native capability, yet you need intelligence collected from digital targets?  You go and buy it.  2016 saw some of the most advanced targeted threats as a service emerge.  Citizen Lab uncovered and documented an amazing history of targeting and exploitation against a human rights activist in the Middle East.  In particular, this exploitation prompted one of the largest security patches to Apple’s iOS, covering three separate vulnerabilities.  The group responsible is NSO Group from Israel.

Of course, this is not new – we’ve previously known about FinFisher and HackingTeam.  Their commercial hacking services target a range of victims.  Their hoarding of zero-days is also a target for others and a vulnerability for all – as seen in the HackingTeam dump.

What we learned: It is amazing how little coverage of these threats exists – yet the companies have public-facing profiles and usually disclose their activities on their websites.  More concerning, less capable states tend to be those with the worst human rights records, and they use this commercial technology to gather intelligence on activists and others.  If we care about protecting the most vulnerable, these services should receive greater attention from the community.

Building Threat Hunting Strategies with the Diamond Model

Hunting cyber threats (especially those never seen previously) is the most expensive and difficult threat intelligence endeavor.  Hunting is a risk because you’re betting that there is something there to find – and that you can find it.  An effective hunter may come up empty most of the time.  Creating an effective threat hunting strategy ensures greater chances for a return on the investment.

An effective strategy includes answering four critical questions and employing the right approach to achieve the goal.  The Diamond Model identifies several “centered-approaches” enabling effective threat hunting.  Tying these approaches together creates the basis for a hunting strategy.  Without a strategy your chances of failure increase dramatically.

Hunting cyber threats is the most expensive and difficult threat intelligence endeavor.

Building a Hunting Strategy with the 4 Hunting Questions

Saying “I’m going threat hunting” is akin to saying “I’m going fishing.”  Both are vague enough to require a follow-up question: “For what?”  Some may answer “malware” or “lateral movement” the same as others answer “salmon” or “bass.”  The next question, naturally, is “where?”  This leads us to the first critical element of a hunting strategy: answering the critical questions.

If you can’t answer these questions well, you might as well go back to what you were doing, because you’ll likely end up wasting time and resources.  Hunting requires patience and discipline.  These four questions are the core of any hunting strategy.

The 4 Hunting Questions

There are four critical questions necessary to build a hunting strategy, and they’re best answered in this order:

  1. What are you hunting?
    • Hunting is expensive and risky.  You must narrow down exactly which activity you are hunting.  Is it exploitation?  Is it lateral movement?  Is it exfiltration?
  2. Where will you find it?
    • What you are hunting will determine where you will find the activity.  You must next narrow down the sources of telemetry which will provide visibility into the activity AND confirm access to stored telemetry.
  3. How will you find it?
    • Once you’ve identified what you’re looking and where you’ll likely find it, next you must identify the tools to hunt.  You don’t catch salmon and bass in the same way – you won’t catch exploitation and lateral movement in the same way.
  4. When will you find it?
    • Have a time bound for your hunting.  A never-ending chase will lead you nowhere.  Allot a specific amount of time necessary to achieve your goal, and if you come up empty at that time – move on to the next target.  If you have to feed your family, and you go out salmon fishing but catch nothing – it’s probably best to instead go after another fish or game before everyone dies of starvation 🙂  Likewise, management may likely lose patience with your hunting if you don’t deliver value.
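One way to enforce this discipline is to refuse to start a hunt until all four answers are written down. A minimal sketch of such a hunt plan (the structure and field names are my own, not from the Diamond Model paper; the example answers are illustrative):

```python
from dataclasses import dataclass

@dataclass
class HuntPlan:
    """A hunt doesn't start until all four questions have answers."""
    what: str   # the activity hunted (exploitation, lateral movement, ...)
    where: str  # telemetry source with visibility into that activity
    how: str    # tools/analytics used to find it
    when: str   # time bound before moving on to the next target

plan = HuntPlan(
    what="lateral movement via reused credentials",
    where="domain controller authentication logs",
    how="cluster logons by account and source host; flag rare pairings",
    when="two weeks, then reassess",
)

# Guard clause a hunt team might apply before committing resources.
assert all(vars(plan).values()), "answer all four questions before hunting"
print(plan.what)
```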

From Strategy to Approach

Once you’ve answered the four critical hunting questions – you must then design the approach.  The approach not only describes the modes and methods of your hunting but, more importantly, addresses the “why.”  The “why” establishes your hypothesis.

Hunters must build and test many hypotheses at once.  Each failed hypothesis can lead to a failed hunt.  For instance, the hunter hypothesizes that they’re breached.  Why else would they be hunting?  Of course, if they’re not – the hunt fails.  The hunter hypothesizes the adversary leverages identities to move across assets.  So, this hypothesis leads the hunter to examine Active Directory logs.  Of course, if the adversary uses file shares they may not show up in AD – the hunt fails.

This step is critical because hunting is a big risk and cost.  And, establishing not just the “how” but also the “why” will help hunters critically examine their approach and look for other methods possibly overlooked.

When hunting adversaries you must always question your approach and look for more creative and effective methods.

The Diamond Model Centered Approaches

The Diamond Model establishes the event as the most basic element of any malicious activity, composed of four core features: the adversary, the victim, infrastructure, and capability.  All malicious activity contains these features (as established in Axiom 1).  Therefore, any hunting is ultimately based on these features, and any hunting approach contains a mix of these “centered approaches.”

However, don’t consider these approaches in isolation.  Instead, a mix of approaches used in concert achieve greater coverage.

The Diamond Model of Intrusion Analysis. An event is shown illustrating the core features of every malicious activity: adversary, victim, capability, and infrastructure. The features are connected based on their underlying relationship.


Named for the feature on which they’re based, the approaches are:

The Victim-Centered Approach

The news of several determined adversaries targeting a single human rights activist is an excellent example of the victim-centered approach.  A victim-centric approach uses the victim as the central element for hunting and looks to illuminate the other Diamond-connected features (i.e., capabilities, infrastructure, adversaries).  The victim-centric hunt is equivalent to a “honeypot.”

Network defenders will most likely focus on the victim-centered approach.  It provides the greatest benefit and is the easiest approach with the highest likelihood of actionable results.  There are many modes and methods within this approach.  Chris Gerritz (@gerritzc) details several victim-centered modes and methods in his post: Approaches to Threat Hunting.

Advantages: catches many adversaries, many hunting opportunities (e.g., network attacks, malicious email delivery, etc.), easily obtained data (usually)

Disadvantages: possible overwhelming amount of malicious activity, too many hunting opportunities can dilute an undisciplined hunting effort

Tips: focus hunt on a single phase of the kill-chain at a time

See Diamond Model Section 7.1.1

An Example Victim-Centered Hunting Strategy

[Why] We hypothesize that several adversaries target a specific victim.

[Why] We further hypothesize that adversaries deliver their capabilities via email (as most do).

[Why] Our hypothesis is strengthened through data that most attacks are delivered via email and our organization has previously received email-borne delivery attacks.

[What] Our hunting goal: collect intelligence on adversary attacks in the email delivery phase.

[Where & How] Therefore, our victim-centered hunting approach includes gaining visibility into victim email and applying tools which illuminate likely malicious elements (links, attachments).  Our primary method will involve detonating attachments and hyperlinks.  Our secondary method will involve sender-receiver graph analysis and header inconsistencies.

[When] We will apply this approach and methodology for 2 weeks after achieving access to data.

This hunting strategy reveals:

  • Capabilities: the tools and techniques used by an adversary to compromise and operate against a victim (e.g., in our example: the malicious attachments)
  • Infrastructure: the logical and physical elements necessary to manage capabilities (e.g., in our example: the email source, malicious attachment C2, URLs)
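The header-inconsistency method above can be sketched with the standard library; a simplified check, with an illustrative sample message and mismatch rules of my own choosing:

```python
from email import message_from_string

def header_inconsistencies(raw: str) -> list[str]:
    """Flag simple From / Return-Path / Reply-To domain mismatches."""
    msg = message_from_string(raw)

    def domain(header: str):
        value = msg.get(header, "") or ""
        # Crude domain extraction; real mail parsing needs more care.
        return value.split("@")[-1].strip("<> ").lower() if "@" in value else None

    findings = []
    from_dom = domain("From")
    for other in ("Return-Path", "Reply-To"):
        other_dom = domain(other)
        if from_dom and other_dom and from_dom != other_dom:
            findings.append(f"{other} domain {other_dom!r} != From domain {from_dom!r}")
    return findings

raw = (
    "From: alice@example.com\n"
    "Return-Path: <bounce@mailer.example.net>\n"
    "Reply-To: alice@example.com\n"
    "Subject: hello\n\nbody\n"
)
print(header_inconsistencies(raw))
```

A mismatched Return-Path is not proof of malice (mailing lists and bulk senders do this legitimately), which is why this is a hunting lead rather than a detection.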

The Infrastructure-Centered Approach

While network defenders will generally take the victim-centered approach, it is not the only hunting approach available.  The infrastructure-centered approach enables hunters to identify malicious infrastructure and possibly pivot to identify capabilities, victims, and more infrastructure.  Most importantly, because infrastructure generally must be operational before capabilities and victims connect to it, new infrastructure can provide preemptive defense.

There are several methods to leverage this approach.  Depending on access and visibility some are easier than others.  For instance, one method is to monitor domain name servers known to host malicious domains.  Another may be to monitor all new domain registrations for a known pattern used by an adversary.

Another popular method is SSL certificate chaining.  PassiveTotal has written a post, “Harnessing SSL Certificates Using Infrastructure Chaining” detailing the method.  Mark Parsons (@markpars0ns) has a great presentation on “Hunting Threat Actors with TLS Certificates.”

Lastly, one of the most difficult methods is direct observation of malicious infrastructure.  This could be done through a service provider – or via infrastructure take-over (such as a sinkhole).  Through this method, significant intelligence can be gained, including: capabilities used through the infrastructure, victims contacting the infrastructure, and potentially other related infrastructure.

Don’t forget about the opportunities to use the Diamond Model to chain multiple approaches together.  For example, after discovering new infrastructure an analyst can pivot and ask for additional information about Diamond-connected features, such as capabilities.  This might be done by pivoting across a malware zoo like VirusTotal for any reference to the infrastructure.

Advantages: Good tools exist to support the approach (PassiveTotal), finding infrastructure prior to operational use provides preemptive defense

Disadvantages: Limited data access, findings not relevant to many organizations

Tips: Data, Data, More Data

See more in the Diamond Model Section 7.1.3

Example Infrastructure-Centered Hunting Strategy

[Why] We hypothesize adversaries establish infrastructure prior to operations

[Why] We hypothesize adversary X continues to structure their domains using the pattern badstuff-<victimname>.com

[Why] We hypothesize adversary X continues to use the same name server to host their infrastructure

[What] Our hunting goal: by monitoring the name server for new names matching the pattern, find new domains prior to their operational use, providing proactive defense.  Further, because the adversary uses the victim name in their domains, we will likely identify victims.

[Where] The name server

[How] Monitor the name server by querying the server every morning for all domains and identifying the domains not seen the previous day, then look for any domains on the server matching the known pattern

[When] We will leverage this strategy for 1 month to provide for any dips in adversary activity during that period
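A minimal sketch of the daily-diff method above, assuming the domain listings have already been retrieved as sets (the `badstuff-<victimname>.com` pattern comes from the hypothesis; the sample domains are invented):

```python
import re

# Hypothetical adversary naming pattern from the hypothesis above; capturing
# the victim name means each new domain also reveals a likely target.
PATTERN = re.compile(r"^badstuff-(?P<victim>[a-z0-9-]+)\.com$")

def new_matching_domains(today: set, yesterday: set) -> dict:
    """Return {domain: victim} for domains first seen today that match the pattern."""
    results = {}
    for domain in sorted(today - yesterday):  # domains not seen the previous day
        match = PATTERN.match(domain)
        if match:
            results[domain] = match.group("victim")
    return results

yesterday = {"badstuff-acme.com", "unrelated.org"}
today = yesterday | {"badstuff-contoso.com", "news-site.net"}
print(new_matching_domains(today, yesterday))  # → {'badstuff-contoso.com': 'contoso'}
```

Run daily, this surfaces new infrastructure (and probable victims) before it is used operationally.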

The Capability-Centered Approach

Aside from the victim-centered approach employed by most network defenders, the capability-centered approach is the second-most popular.  This is largely due to the broad accessibility of a massive malware zoo – VirusTotal.  If VirusTotal didn’t exist, this approach would likely be limited to only anti-virus vendors and others with potentially large collections of malicious binaries.

The capability-centered approach focuses on discovering intelligence from adversary tools – namely “malware” (but the category is larger than malware and includes legitimate tools used illegitimately).  The most advanced hunters using this approach take advantage of the VirusTotal retrohunt feature enabling analysts to run YARA rules over the VirusTotal zoo looking for lesser known samples.

Advantages: easy access to large malware library (VirusTotal), easily written analytics (YARA)

Disadvantages: without your own malware zoo – limited to VirusTotal features

Tips: take advantage of VirusTotal

See more in the Diamond Model Section 7.1.2

Example Capability-Centered Hunting Strategy

[Why] We hypothesize that network defenders share adversary capabilities via VirusTotal

[Why] We hypothesize that we can identify unique malware via a malware zoo using static analysis

[What] Our hunting goal: find undiscovered malware and its associated command and control (C2) channel to feed host- and network-based detection to enhance protection

[Where] VirusTotal

[How] Author and execute YARA rules over the VirusTotal data and monitor the zoo daily for new samples meeting our criteria

[When] We will author and improve rules for 2 weeks and run them perpetually
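Without retrohunt access, the core idea – static pattern rules run over a malware zoo – can be sketched in plain Python.  This is a toy stand-in for YARA, not a replacement, and the rule strings are hypothetical:

```python
from pathlib import Path
import tempfile

# Toy rules: rule name -> byte patterns that must ALL appear in a sample.
# These strings are invented for illustration, not real family signatures.
RULES = {
    "suspected_family_x": [b"cmd.exe /c", b"badstuff-"],
}

def scan_zoo(zoo: Path) -> dict:
    """Run every rule over every file in the zoo; return {rule: [matching files]}."""
    hits = {name: [] for name in RULES}
    for sample in zoo.iterdir():
        data = sample.read_bytes()
        for name, patterns in RULES.items():
            if all(p in data for p in patterns):
                hits[name].append(sample.name)
    return hits

# Build a tiny throwaway zoo to demonstrate.
zoo = Path(tempfile.mkdtemp())
(zoo / "a.bin").write_bytes(b"...cmd.exe /c ping badstuff-acme.com...")
(zoo / "b.bin").write_bytes(b"benign content")
print(scan_zoo(zoo))  # → {'suspected_family_x': ['a.bin']}
```

Real YARA adds condition logic, wildcards, and performance at zoo scale; the hunting loop (author rules, scan daily, triage hits) is the same.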

The Adversary-Centered Approach

The adversary-centered approach focuses on visibility into the adversary themselves – meaning few organizations have the requisite access; it is usually limited to service providers and those with extraordinary capabilities.  However, achieving visibility directly on the adversary generally provides tremendous, almost perfect, insight.  This includes infrastructure creation and management, capabilities (sometimes those in development), attribution details, and at times victim information.

However, others may access some methods within this approach.  For instance, knowing an adversary persona may allow an analyst to leverage open source intelligence (OSINT) to track the persona across sites potentially gaining insight into operations.  Further, an analyst may leverage adversary operations security (OPSEC) mistakes to achieve attribution based on their persona.  ThreatConnect’s CameraShy work illustrates the adversary-centered approach to achieve attribution through persona development and tracking.

However, while this approach leads to “newsworthy” items regarding attribution, its direct application to network defense is limited.  Therefore, generally only those with a vested interest in attribution leverage this approach.

Advantages: possible adversary attribution, deeper visibility into adversary operations

Disadvantages: the most difficult approach requiring significant (and sometimes extraordinary) visibility or adversary mistakes, does not generally result in actionable intelligence, adversary “false flag” mis-attribution may trip up undisciplined analysts

Tips: leverage OSINT and pray for a mistake 🙂

See more in the Diamond Model Section 7.1.4

Example Adversary-Centered Hunting Strategy

[Why] We hypothesize adversaries use personas to register malicious domain names

[Why] We hypothesize that some of these domain registration aliases relate to real people

[Why] We hypothesize that adversaries have mistakenly tied their operational alias to their real personas revealing their personal details

[What] Our hunting goal: uncover the real people behind malicious domains providing attribution

[Where] Domain registration data and other open sources

[How] Take 500 known malicious domains, extract aliases details from their registration, pivot across open sources looking for correlation with real people

[When] Spend 3 days collating known malicious domains, 2 weeks pivoting across open sources
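The alias-pivoting step reduces to a grouping problem once registration records are collected; a sketch with invented records (real hunts would pull these from WHOIS or registration datasets):

```python
from collections import defaultdict

# Assumed registration records; the domains and aliases are illustrative only.
registrations = [
    {"domain": "badstuff-acme.com", "email": "opsalias@mail.example"},
    {"domain": "badstuff-contoso.com", "email": "opsalias@mail.example"},
    {"domain": "unrelated-shop.net", "email": "shopowner@mail.example"},
]

def group_by_alias(records):
    """Cluster domains by registrant email; reused aliases link infrastructure."""
    clusters = defaultdict(list)
    for record in records:
        clusters[record["email"]].append(record["domain"])
    # Only aliases seen on multiple domains are interesting pivot points.
    return {alias: doms for alias, doms in clusters.items() if len(doms) > 1}

print(group_by_alias(registrations))
# → {'opsalias@mail.example': ['badstuff-acme.com', 'badstuff-contoso.com']}
```

Each multi-domain alias then becomes an open-source pivot: search the alias across forums, breach dumps, and social media for a tie to a real persona.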

Why Threat Intelligence Sharing is Not Working: Towards An Incentive-Based Model

The juggernaut known as the “threat intelligence sharing imperative” rolls on.  Security and industry conferences fill their time with “sharing.”  How many sharing groups and platforms do we require?  Too many exist.  AlienVault recently reported that 76% of survey respondents feel a “moral obligation to share threat intelligence.”  McAfee says sharing threat intelligence “is the only way we win” (that isn’t even remotely true).  However, it’s not working.

According to Robert Lemos in eWeek, even with the most recent US cybersecurity legislation providing legal immunity, organizations are not rushing to share.  The reason is simple: immunity was only one component of a complicated problem.  While the legislation addressed one policy element, it didn’t address the fact that sharing has never been proven (with data) to benefit sharing organizations.

We must move beyond these “religious” arguments and provide clear incentives for defenders to share.

In January, President Obama signed the Cybersecurity Act of 2015, but companies remain in a holding pattern, waiting for legal clarity and demonstrable benefits before sharing sensitive information.

– Robert Lemos, eWeek “Cyber-Threat Data Sharing Off to Slow Start Despite U.S. Legislation” [2016-10-02]

The Loudest in the Room

There is one thing I notice – security vendors yell the loudest about sharing.  I don’t claim their sharing narrative is FUD, but the sharing narrative is a net positive for them.  The more data and intelligence they receive, the stronger their products and services, adding value to their organization.

Security vendors have strong incentives to promote threat intelligence sharing.  But, what is the case that the cost of sharing to defenders is a net benefit to them?

Sharing is Costly

I’ve been involved in threat intelligence sharing for a long time.  I am the first to support the notion of sharing.  I have story upon story supporting the sharing narrative.  But, I qualify my support: the value of sharing must exceed the cost.

Most network defenders will agree: sharing is costly.

  1. It requires significant cost to integrate externally shared threat intelligence effectively.
  2. Once you consume that threat intelligence, you quickly discover its poor quality may consume your security team – it requires significant tuning.  There is risk.
  3. Establishing a sharing mechanism, program, and process is costly.  It usually requires engineering effort.
  4. Management support for sharing usually requires political capital from network defense leaders.  They must prove that the resources spent on sharing are more important than the 20 other components competing for resources.  Also, let’s not forget about the legal support.

An Incentive-Based Approach

Sharing must go beyond a “religious” argument.  Instead, we must take an incentive-based approach.  We must create and promote incentives for defenders to share – with demonstrable results.  Therefore, those promoting sharing must provide a coherent and consistent data-driven case that sharing overcomes these costs to defending organizations.  “Share because it is good for you” is not enough.

So, next time you advocate for sharing – enumerate why network defenders should share.  Make it meaningful.  Make it data-driven.

4 Qualities of Defensible Products – Secure Products Are Not Enough

For decades the industry has worked to build secure products: products which can withstand attacks, usually by reducing vulnerabilities and exposures.

However, what happens when that fails and an attack succeeds in spite of the work done to secure the product?  I propose that we require both secure products AND defensible products: products which not only resist attacks but can be successfully defended when attacks bypass protection.

4 Qualities of Defensible Products

  1. Visibility – the visibility necessary to detect unauthorized use and malicious attacks
  2. Transparency – the transparency into the product’s operations to conduct a proper investigation and response after detection
  3. Controls –  the controls necessary to remediate a threat after detection and investigation
  4. Resilience – the product returns to a working state quickly after remediation (or remains operational during an attack)

Indicators and Security Analytics: Their Place in Detection and Response

Indicators for research and response; analytics for detection

Indicators of Compromise (IOCs) are the lingua franca of threat intelligence.  Almost every intel sharing conversation begins and ends with indicators; commercial intelligence platforms revolve around them; most intelligence consumers end their interest there.  Does a better way exist?  Security analytics!

The Problem with Indicators in Detection

For all the focus given to indicators, we know they have the shortest lifespan of all intelligence exhaust (see the Pyramid of Pain by David J. Bianco).  In many cases, we see single-use or victim-specific indicators, making sharing them useless.  In general, adversaries tend toward shortening the indicator lifespan – or removing indicators entirely; for instance, Locky recently transitioned to hardcoded RSA keys to remove the vulnerability of connecting to a command and control (C2) server.

Broad-based indicator sharing is fraught with problems.  First, it assumes that the same indicators will be leveraged against multiple victims.  This is certainly the case for some threats.  But not all.  Second, quality will likely be a problem.  For instance, DHS Automated Indicator Sharing (AIS) states:

Indicators are not validated by DHS as the emphasis is on velocity and volume: our partners tell us they will vet the indicators they receive through AIS, so the Department’s goal is to share as many indicators as possible as quickly as possible. However, when the government has useful information about an indicator, we will assign a reputation score.   – DHS Automated Information Sharing

Further, AIS contributors can choose to remain anonymous.  Think about the problems of blindly consuming thousands of non-validated, anonymously sourced indicators.  How exactly do you effectively validate an anonymously contributed indicator?  Previously, I wrote on the cost of poor intelligence.  Just one poor indicator from an anonymous contributor could cause massive issues.

Indicators of Compromise are only threat exhaust –  the necessary by-product of malicious activity.  Short-lived and increasingly single use, indicators pose a poor basis for detection – and it’s getting worse.  I’m not advocating for throwing indicators out entirely – they serve their purpose, but should not form the entire basis of threat intelligence detection.

Analytics For Detection

As the Pyramid of Pain suggests, we must move toward behavior-based detection focusing on whole classes of threats.  I’d much rather rely on an analytic detecting the overwriting of Windows registry keys in a “sticky keys” attack than hope someone shares an IP address of a random hop point previously used to remote desktop (RDP) into a host.  In the analytic case I catch every adversary using sticky keys; in the indicator case I catch only one adversary – with the hope they use the same infrastructure again.
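A sticky-keys analytic of this kind can be sketched over normalized registry-write events.  The event schema below is assumed for illustration; real telemetry (Sysmon registry events, for example) carries the same information under different field names:

```python
# Hypothetical normalized registry-write events for illustration.
events = [
    {"key": r"HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion"
            r"\Image File Execution Options\sethc.exe",
     "value": "Debugger", "data": r"C:\Windows\System32\cmd.exe"},
    {"key": r"HKLM\SOFTWARE\Example\Setting", "value": "Path", "data": "C:\\app"},
]

# Accessibility binaries commonly abused in sticky-keys-style attacks.
ACCESSIBILITY_BINARIES = ("sethc.exe", "utilman.exe", "osk.exe")

def sticky_keys_alerts(registry_events):
    """Flag Debugger values set under Image File Execution Options
    for accessibility binaries - behavior, not a specific indicator."""
    alerts = []
    for event in registry_events:
        key = event["key"].lower()
        if ("image file execution options" in key
                and key.endswith(ACCESSIBILITY_BINARIES)
                and event["value"].lower() == "debugger"):
            alerts.append(event)
    return alerts

print(len(sticky_keys_alerts(events)))  # → 1
```

Note what the analytic does not contain: no IP, no hash, no domain.  Any adversary using the technique trips it, which is exactly the point of behavior-based detection.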

Where do you find analytics?

  • The best place is your red team – ask them to describe their techniques and procedures.  Read their reports!  (I know – a stretch for some)
  • Read threat intelligence reports on adversary behaviors.
  • Ask your threat intelligence provider!  (Who you already abuse with information requests anyways – right?)
  • Check out MITRE’s Cyber Analytics Repository.

The Place for Indicators – Research and Response

Indicator sharing works within a small group of organizations that share a “victim space” (as the Diamond Model refers to victims with shared threats).  This greatly increases the value of shared indicators because the likelihood of attackers reusing indicators increases.  However, indicator sharing outside the shared victim space reduces their value and increases their cost.  Research and response receive the greatest value from shared indicators, which provide a method of communicating observables discovered in attacks, letting analysts pivot deeper into malicious activity seen by others.

Your Own Intelligence is the Best

In the end, to achieve greater detection capability organizations must invest in security analytics and reduce their reliance (and insistence) on indicators from externals.  The best indicators in the world are those from your organization’s own telemetry – your own threat intelligence is the most relevant.  Otherwise, look suspiciously at indicators from others and instead ask to share analytics!

Note: “security analytics” is a dirty term – overused and often misused.  To be clear, I define analytics in this post as indicator-independent behavioral detection derived from knowledge of bad stuff (i.e., threat intelligence).
