Always A Bad Day For Adversaries

Category: Strategy

2016 Targeted Threats in Review – What We’ve Learned

In 2016 we saw some remarkable targeted threat events: election disruption, major financial fraud, more zero-days, new groups uncovered, and more.  Information security tends toward “firefighting” mode, always concerned with the latest and newest.  Too easily, however, we fall into recency bias, falsely giving the most recent events the greatest weight in our mental model.  To break that, I take time each year to look back over the threats we saw earlier and correct my perspective.  The most interesting items are those which challenge and break assumptions, causing me to reevaluate my models.  They are the events which surprise and impress seasoned analysts.

This year I saw items fall into three categories:

  • Victimology & Intent – While targeted threats continue to conduct traditional espionage against both government and industry, their victimology and intent are shifting slightly to better align with changing motivations.
  • Capabilities – Adversaries continue to value effectiveness over all else to support their operations (this year: macros and increasing evasion techniques).
  • The “Duller Objects” – I call them the “dull objects” because they are not shiny and don’t usually get much attention.  Yet they can pose as much of a threat as any shiny object and act as bellwethers.

Victimology & Intent

US Industry Victims

In September 2015, the US and China agreed that neither would “conduct or knowingly support theft of intellectual property.”  FireEye reported in “Redline Drawn” that there has been a significant drawdown of US-targeted operations since at least 2014.  But while there has been a decline, they could not attribute it solely to the agreement.  Similarly, Symantec reported that in 2016 a notable threat group previously operating heavily against US targets, Buckeye, ceased US operations and shifted strongly towards Hong Kong.

What we learned: While there is no public smoking gun showing that the US-China agreement had a direct effect on US victims, there is a clear shift in operations according to FireEye and Symantec.  This illustrates that even the long-standing “China hacking the US” narrative, which drove a majority of targeted threat protection investment, can shift.

Swift Attacks

In February 2016 an adversary leveraged access into the Bangladesh Central Bank to defraud other banks of $81M, attempting fraudulent transactions of over $1B.  Symantec correlated the attack to the Lazarus group, based in the DPRK, via malware analysis and similarity to the tools used to attack Sony Pictures.  SWIFT is the backbone of the global financial network, supporting over 6 billion messages per year between over 200 countries and territories.  However, thanks to a small typo by the adversary in the name of the non-profit supposedly benefiting from the transactions, an analyst caught the error and found the fraud, allowing SWIFT and the banks to unravel the operation.  Later, analysts discovered further fraudulent activity, possibly implying that either more than one group is active in the network or the Lazarus group had deeper or wider access than initially thought.

What we learned: If we trust Symantec’s analysis and attribution to the Lazarus Group, this is a significant example of state-enabled cyber actors conducting global financial fraud.  It could cause other financially struggling states to look to their cyber capabilities to enable financial fraud, and it widens the scope of victimology for groups which fit this profile.

Summer of STRONTIUM


Probably the biggest security story this year – and by far the biggest targeted threat story ever in the public dialog: hacks into the Democratic National Committee (DNC), the Democratic Congressional Campaign Committee (DCCC), the World Anti-Doping Agency (WADA), and many others during a tumultuous year involving the US presidential election and the 2016 Olympics in Rio de Janeiro.

Traditionally, targeted threat actors like STRONTIUM exploit victims to gather intelligence and use it quietly, avoiding all attention; they rarely use the intelligence publicly.  In this case, they posted at least some of the intelligence publicly in large dumps.  This is not the first example of such activity: previous examples include the dumping of Sony Pictures emails and the Syrian Electronic Army (SEA) dumping Forbes’ databases.

This, alongside other activity including intrusions into state election offices, made headlines and potentially affected the US elections.

What we learned: STRONTIUM reinforced that the value of intelligence is not what you collect but how you use it.  Like the Sony email dump, this likely points to a trend over time of dumping intelligence rather than hoarding it, especially in cases where the greatest harm ultimately comes from disruption.  This will further reinforce the relationship between cyber activities and traditional information operations.  Read more about STRONTIUM.

Capabilities

Macros – Macros Everywhere


Relegated by many to the past, macros came back this year with a vengeance.  As reported by both Microsoft and Trend Micro, macro threats rose significantly during 2016.

This was not limited to commodity threats (such as ransomware); targeted threats adopted the technique as well.

Luckily, Office 2016 included new controls for administrators and defenders to block macro attacks.
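In practice administrators push this control through Group Policy.  Purely as an illustrative sketch – and assuming the commonly documented per-user policy value name blockcontentexecutionfrominternet, which you should verify against Microsoft’s Group Policy documentation before relying on it – setting the equivalent registry value for Word might look like this:

```python
# Hedged sketch only: assumes the commonly documented Office 2016 policy value
# "blockcontentexecutionfrominternet"; verify against Microsoft documentation.
# Windows-only (uses the standard-library winreg module); applies per user.
import winreg

KEY_PATH = r"Software\Policies\Microsoft\Office\16.0\Word\Security"

with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    # 1 = block macros in Office files that came from the Internet
    winreg.SetValueEx(key, "blockcontentexecutionfrominternet", 0,
                      winreg.REG_DWORD, 1)

print("Macro-blocking policy value set for Word 2016 (current user).")
```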

What we learned: I always teach my students that “what was old will be new again” in the threat landscape.  It is critical that professionals study the threats of old because they, or some variant, will likely return.  However, our hubris often tends to dismiss these threats because they are not as “cool” as newer techniques.  But, as we know, adversaries care less about newer techniques than they care about effectiveness.

Evasion

Anti-forensics and sandbox evasion are not new.  However, as the security industry has matured and shortened the life-cycle of many threats, malware authors are increasingly using sandbox evasion throughout their entire kill chain.  Because static signatures have failed, network defense has had to rely increasingly on automated techniques (such as dynamic malware detonation) to protect networks, and malicious capabilities naturally respond to that defensive environment by disrupting automated techniques.  Lastly, security researchers are increasingly effective at discovering and uncovering operations and pose an equal harm to malicious activity.

For targeted threats, this includes not only the long-term Trojan or backdoor but the entire operations chain through the first stage.  It is now common to encounter JavaScript reconnaissance profiling the victim, with some back-end logic determining whether they are “worthy” of receiving the dropper or later stages.  This victim profiling includes the locale of the host, domain affiliation, hostname, etc.  Further techniques include just-in-time decryption and modularization to prevent the full capability from being captured.
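To make the profiling-and-gating pattern concrete, here is a minimal, purely illustrative sketch of the kind of back-end decision logic described above.  Every field name and criterion is invented for the example, not taken from any real campaign:

```python
# Illustrative sketch only: a hypothetical server-side "gate" deciding whether
# a profiled visitor receives the next stage. All names/criteria are invented.

def is_worthy(profile: dict) -> bool:
    """Return True if the profiled host looks like an intended target."""
    wanted_locales = {"en-US", "en-GB"}          # hypothetical targeting criteria
    wanted_domain_suffixes = (".corp.example",)  # hypothetical victim domain

    if profile.get("locale") not in wanted_locales:
        return False
    if not profile.get("domain", "").endswith(wanted_domain_suffixes):
        return False
    # Refuse obvious analysis environments (hostnames like "sandbox-01").
    hostname = profile.get("hostname", "").lower()
    if any(marker in hostname for marker in ("sandbox", "analysis", "malware")):
        return False
    return True

# Example: a sandbox-looking host never receives the dropper.
print(is_worthy({"locale": "en-US", "domain": "pc.corp.example", "hostname": "sandbox-01"}))
```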

What we learned: This reinforces our understanding that adversaries will always react to the security environment.  Elements which harm their operations, such as security researchers or automated analytics, will always be countered over time.

The “Duller Objects”

PLATINUM


PLATINUM is a fascinating activity group.  Their operations read like a textbook for targeted threats: zero-day usage, long-term persistence, novel DLL injection, time-limited operations to prevent discovery, anti-forensics techniques, political targeting, etc.

What we learned: Not only did we learn some fascinating new techniques to bolster our defenses; PLATINUM also taught us that a threat operating since at least 2009 can still be newly discovered in 2016.  This means there are likely many more targeted threats operating than are cataloged by threat intelligence analysts.

Targeted Threat As A Service

What if you are an actor with little-to-no native capability, yet you need intelligence collected from digital targets?  You go and buy it.  2016 saw some of the most advanced targeted-threat-as-a-service offerings emerge.  Citizen Lab uncovered and documented an amazing history of targeting and exploitation against a human rights activist in the Middle East.  Notably, this exploitation prompted one of the most significant security patches to Apple’s iOS, addressing three separate vulnerabilities.  The company responsible is Israel’s NSO Group.

Of course, this is not new – we’ve previously known about FinFisher and HackingTeam.  Their commercial hacking services target a range of victims.  Their hoarding of zero-days is also a target for others and a vulnerability for all – as seen in the HackingTeam dump.

What we learned: It is amazing how little coverage of these threats exists – yet the companies have public-facing profiles and usually disclose their activities on their websites.  More concerning, less capable states tend to be those with the worst human rights records, and they use this commercial technology to gather intelligence on activists and others.  If we care about using our capabilities to protect the most vulnerable, these services should receive greater attention from the community.

Building Threat Hunting Strategies with the Diamond Model

Hunting cyber threats (especially those never seen previously) is the most expensive and difficult threat intelligence endeavor.  Hunting is a risk because you are betting that there is something to find – and that you can find it.  An effective hunter may come up empty most of the time.  Creating an effective threat hunting strategy ensures a greater chance of a return on the investment.

An effective strategy includes answering four critical questions and employing the right approach to achieve the goal.  The Diamond Model identifies several “centered-approaches” enabling effective threat hunting.  Tying these approaches together creates the basis for a hunting strategy.  Without a strategy your chances of failure increase dramatically.

Hunting cyber threats is the most expensive and difficult threat intelligence endeavor.

Building a Hunting Strategy with the 4 Hunting Questions

Throwing out “I’m going threat hunting” is akin to saying, “I’m going fishing.”  Both are such vague phrases that they generally require a follow-up question: “For what?”  Some may answer “malware” or “lateral movement” the same way others answer “salmon” or “bass.”  The next question, naturally, is “where?”  This leads us to the first critical element of a hunting strategy: answering the critical questions.

If you can’t answer these questions well, you might as well go back to what you were doing, because you will likely just waste time and resources.  Hunting requires patience and discipline.  These four questions are the core of any hunting strategy.

The 4 Hunting Questions

There are four critical questions necessary to build a hunting strategy, and they’re best answered in this order:

  1. What are you hunting?
    • Hunting is expensive and risky.  You must narrow down exactly which activity you are hunting for.  Is it exploitation?  Is it lateral movement?  Is it exfiltration?
  2. Where will you find it?
    • What you are hunting will determine where you will find the activity.  You must next narrow down the sources of telemetry which provide both visibility into the activity and access to stored telemetry.
  3. How will you find it?
    • Once you’ve identified what you’re looking for and where you’ll likely find it, you must next identify the tools to hunt with.  You don’t catch salmon and bass in the same way – and you won’t catch exploitation and lateral movement in the same way.
  4. When will you find it?
    • Set a time bound for your hunting.  A never-ending chase will lead you nowhere.  Allot a specific amount of time to achieve your goal, and if you come up empty at that point – move on to the next target.  If you have to feed your family and you go out salmon fishing but catch nothing, it’s probably best to go after another fish or game before everyone dies of starvation 🙂  Likewise, management will likely lose patience with your hunting if you don’t deliver value.

From Strategy to Approach

Once you’ve answered the four critical hunting questions – you must then design the approach.  The approach not only describes the modes and methods of your hunting but, more importantly, addresses the “why.”  The “why” establishes your hypothesis.

Hunters must build and test many hypotheses at once.  Each failed hypothesis can lead to a failed hunt.  For instance, the hunter hypothesizes that they’re breached.  Why else would they be hunting?  Of course, if they’re not – the hunt fails.  The hunter hypothesizes the adversary leverages identities to move across assets.  So, this hypothesis leads the hunter to examine Active Directory logs.  Of course, if the adversary uses file shares they may not show up in AD – the hunt fails.

This step is critical because hunting is a big risk and cost.  And, establishing not just the “how” but also the “why” will help hunters critically examine their approach and look for other methods possibly overlooked.

When hunting adversaries you must always question your approach and look for more creative and effective methods.

The Diamond Model Centered Approaches

The Diamond Model establishes the event as the most basic element of any malicious activity, composed of four core features: the adversary, the victim, infrastructure, and capability.  All malicious activity contains these features (as established in Axiom 1).  Therefore, any hunting is ultimately based on these features, and any hunting approach contains a mix of these “centered approaches.”

However, don’t consider these approaches in isolation.  Instead, a mix of approaches used in concert achieves greater coverage.
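As a concrete sketch (entirely illustrative, not anything defined by the Diamond Model paper itself), an event can be thought of as a simple record carrying the four core features:

```python
# Minimal illustrative sketch: a Diamond event as a data structure carrying
# the four core features named above. Field values are hypothetical.
from dataclasses import dataclass

@dataclass
class DiamondEvent:
    adversary: str       # who is conducting the activity (often unknown at first)
    victim: str          # who or what is being targeted
    infrastructure: str  # logical/physical delivery and C2 resources
    capability: str      # tool or technique employed

event = DiamondEvent(
    adversary="unknown",
    victim="mail.victim.example",
    infrastructure="badstuff-victim.com",
    capability="macro-laden attachment",
)
print(event)
```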

The Diamond Model of Intrusion Analysis. An event is shown illustrating the core features of every malicious activity: adversary, victim, capability, and infrastructure. The features are connected based on their underlying relationship.


Named for the feature on which they’re based, the approaches are:

The Victim-Centered Approach

The news of several determined adversaries targeting a single human rights activist is an excellent example of the victim-centered approach.  A victim-centric approach uses the victim as the central element for hunting and looks to illuminate the other Diamond-connected features (i.e., capabilities, infrastructure, adversaries).  The victim-centric hunt is equivalent to a “honeypot.”

Network defenders will most likely focus on the victim-centered approach.  It provides the greatest benefit and is the easiest approach, with the highest likelihood of actionable results.  There are many modes and methods within this approach.  Chris Gerritz (@gerritzc) details several victim-centered modes and methods in his post: Approaches to Threat Hunting.

Advantages: catches many adversaries, many hunting opportunities (e.g., network attacks, malicious email delivery, etc.), easily obtained data (usually)

Disadvantages: possible overwhelming amount of malicious activity, too many hunting opportunities can dilute an undisciplined hunting effort

Tips: focus hunt on a single phase of the kill-chain at a time

See Diamond Model Section 7.1.1

An Example Victim-Centered Hunting Strategy

[Why] We hypothesize that several adversaries target a specific victim.

[Why] We further hypothesize that adversaries deliver their capabilities via email (as most do).

[Why] Our hypothesis is strengthened by data showing that most attacks are delivered via email and by the fact that our organization has previously received email-borne delivery attacks.

[What] Our hunting goal: collect intelligence on adversary attacks in the email delivery phase.

[Where & How] Therefore, our victim-centered hunting approach includes gaining visibility into the victim’s email and applying tools which illuminate likely malicious elements (links, attachments).  Our primary method will involve detonating attachments and hyperlinks.  Our secondary method will involve sender-receiver graph analysis and header inconsistencies (a simple header-inconsistency check is sketched after the list below).

[When] We will apply this approach and methodology for 2 weeks after achieving access to data.

This hunting strategy reveals:

  • Capabilities: the tools and techniques used by an adversary to compromise and operate against a victim (e.g., in our example: the malicious attachments)
  • Infrastructure: the logical and physical elements necessary to manage capabilities (e.g., in our example: the email source, malicious attachment C2, URLs)
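As a minimal sketch of the secondary method above (header inconsistencies), and assuming access to raw message text, a simple check might compare the From domain against Return-Path and Reply-To.  The rules here are illustrative, not exhaustive:

```python
# Minimal sketch of the "header inconsistencies" method: flag messages whose
# From domain disagrees with Return-Path or Reply-To. Illustrative rules only.
from email import message_from_string
from email.utils import parseaddr

def header_inconsistencies(raw_message: str) -> list[str]:
    msg = message_from_string(raw_message)
    findings = []

    def domain(header: str) -> str:
        addr = parseaddr(msg.get(header, ""))[1]
        return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

    from_dom = domain("From")
    for other in ("Return-Path", "Reply-To"):
        other_dom = domain(other)
        if other_dom and from_dom and other_dom != from_dom:
            findings.append(f"{other} domain {other_dom!r} != From domain {from_dom!r}")
    return findings

raw = "From: CEO <ceo@victim.example>\r\nReply-To: payme@freemail.example\r\nSubject: urgent\r\n\r\nbody"
print(header_inconsistencies(raw))
```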

The Infrastructure-Centered Approach

While network defenders will generally take the victim-centered approach, that is not the only hunting approach available.  The infrastructure-centered approach enables hunters to identify malicious infrastructure and possibly pivot to identify capabilities, victims, and more infrastructure.  Most importantly, because infrastructure generally must be operational before capabilities and victims connect, newly identified infrastructure can provide preemptive defense.

There are several methods to leverage this approach.  Depending on access and visibility some are easier than others.  For instance, one method is to monitor domain name servers known to host malicious domains.  Another may be to monitor all new domain registrations for a known pattern used by an adversary.

Another popular method is SSL certificate chaining.  PassiveTotal has written a post, “Harnessing SSL Certificates Using Infrastructure Chaining” detailing the method.  Mark Parsons (@markpars0ns) has a great presentation on “Hunting Threat Actors with TLS Certificates.”

Lastly, one of the most difficult methods is direct observation of malicious infrastructure.  This could be done through a service provider – or via infrastructure take-over (such as a sinkhole).  Through this method, significant intelligence can be gained, including: capabilities used through the infrastructure, victims contacting the infrastructure, and potentially other related infrastructure.

Don’t forget about the opportunities to use the Diamond Model to chain multiple approaches together.  For example, after discovering new infrastructure an analyst is able to pivot and ask for additional information about Diamond-connected features, such as capabilities.  This might be done by pivoting across a malware zoo like VirusTotal for any reference to the infrastructure.

Advantages: Good tools exist to support the approach (PassiveTotal), finding infrastructure prior to operational use provides preemptive defense

Disadvantages: Limited data access, findings not relevant to many organizations

Tips: Data, Data, More Data

See more in the Diamond Model Section 7.1.3

Example Infrastructure-Centered Hunting Strategy

[Why] We hypothesize adversaries establish infrastructure prior to operations

[Why] We hypothesize adversary X continues to structure their domains using the pattern badstuff-<victimname>.com

[Why] We hypothesize adversary X continues to use the name server baddomains.com to host their infrastructure and the same domain-naming pattern

[What] Our hunting goal: by monitoring the name server for new domains matching the pattern, we may find new infrastructure prior to its operational use, providing proactive defense.  Further, because the adversary uses the victim name in their domains, we will likely identify victims.

[Where] The baddomains.com name server

[How] Monitor the baddomains.com name server by querying the server every morning for all domains, identifying the domains not seen the previous day, and looking for any domains on the server matching the known pattern (a minimal sketch of this daily check follows this strategy)

[When] We will leverage this strategy for 1 month to account for any dips in adversary activity during that period
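A minimal sketch of the [How] step above, assuming a hypothetical domains_on_nameserver() helper standing in for whatever passive DNS or zone-feed source is available; the badstuff-&lt;victimname&gt;.com pattern comes from the example hypothesis:

```python
# Minimal sketch of the daily name-server check described above.
# domains_on_nameserver() is a hypothetical stand-in for a passive-DNS or
# zone-feed source; replace it with your own data access.
import json
import re
from pathlib import Path

PATTERN = re.compile(r"^badstuff-(?P<victim>[a-z0-9-]+)\.com$", re.IGNORECASE)
STATE_FILE = Path("seen_domains.json")

def domains_on_nameserver(ns: str) -> set[str]:
    """Stand-in for a passive-DNS or zone-feed lookup (hypothetical data)."""
    return {"badstuff-acme.com", "unrelated-site.com"}

def daily_check(ns: str = "baddomains.com") -> None:
    seen = set(json.loads(STATE_FILE.read_text())) if STATE_FILE.exists() else set()
    current = domains_on_nameserver(ns)
    for domain in sorted(current - seen):          # new since the last run
        match = PATTERN.match(domain)
        if match:
            print(f"ALERT: new pattern match {domain} (likely victim: {match['victim']})")
        else:
            print(f"new domain on {ns}: {domain}")
    STATE_FILE.write_text(json.dumps(sorted(current)))

daily_check()
```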

The Capability-Centered Approach

Aside from the victim-centered approach employed by most network defenders, the capability-centered approach is the second-most popular.  This is largely due to the broad accessibility of a massive malware zoo – VirusTotal.  If VirusTotal didn’t exist, this approach would likely be limited to only anti-virus vendors and others with potentially large collections of malicious binaries.

The capability-centered approach focuses on discovering intelligence from adversary tools – namely “malware” (but the category is larger than malware and includes legitimate tools used illegitimately).  The most advanced hunters using this approach take advantage of the VirusTotal retrohunt feature enabling analysts to run YARA rules over the VirusTotal zoo looking for lesser known samples.

Advantages: easy access to large malware library (VirusTotal), easily written analytics (YARA)

Disadvantages: without your own malware zoo – limited to VirusTotal features

Tips: take advantage of VirusTotal

See more in the Diamond Model Section 7.1.2

Example Capability-Centered Hunting Strategy

[Why] We hypothesize that network defenders share adversary capabilities via VirusTotal

[Why] We hypothesize that we can identify unique malware via a malware zoo using static analysis

[What] Our hunting goal: find undiscovered malware and its associated command and control (C2) channel to feed host- and network-based detection to enhance protection

[Where] VirusTotal

[How] Author and execute YARA rules over the VirusTotal data and monitor the zoo daily for new samples meeting our criteria (a local-zoo sketch of the same idea follows this strategy)

[When] We will author and improve rules for 2 weeks and run them perpetually
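Retrohunt itself runs inside VirusTotal, but the same idea applies to a local zoo.  Here is a minimal sketch using the yara-python package with a toy rule (not a real detection):

```python
# Minimal sketch: run a YARA rule over a local sample directory.
# The rule below is a toy example, not a real detection.
from pathlib import Path
import yara  # pip install yara-python

RULE = """
rule toy_example_dropper
{
    strings:
        $marker = "badstuff-" ascii nocase
    condition:
        $marker
}
"""

rules = yara.compile(source=RULE)

for sample in Path("zoo/").glob("*"):
    if not sample.is_file():
        continue
    matches = rules.match(str(sample))
    if matches:
        print(f"{sample.name}: {[m.rule for m in matches]}")
```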

The Adversary-Centered Approach

The adversary-centered approach focuses on gaining visibility on the adversary themselves – meaning few organizations have the requisite visibility; it is usually limited to service providers and those with extraordinary capabilities.  However, achieving visibility directly on the adversary generally provides tremendous, almost perfect, insight.  This includes infrastructure creation and management, capabilities (sometimes those still in development), attribution details, and at times victim information.

However, others may use some methods within this approach.  For instance, knowing an adversary persona may allow an analyst to leverage open source intelligence (OSINT) to track the persona across sites, potentially gaining insight into operations.  Further, an analyst may leverage adversary operations security (OPSEC) mistakes to achieve attribution based on their persona.  ThreatConnect’s CameraShy work illustrates the adversary-centered approach to achieve attribution through persona development and tracking.

However, while this approach leads to “newsworthy” items regarding attribution, its direct application to network defense is limited.  Therefore, generally only those with a vested interest in attribution leverage this approach.

Advantages: possible adversary attribution, deeper visibility into adversary operations

Disadvantages: the most difficult approach requiring significant (and sometimes extraordinary) visibility or adversary mistakes, does not generally result in actionable intelligence, adversary “false flag” mis-attribution may trip up undisciplined analysts

Tips: leverage OSINT and pray for a mistake 🙂

See more in the Diamond Model Section 7.1.4

Example Adversary-Centered Hunting Strategy

[Why] We hypothesize adversaries use personas to register malicious domain names

[Why] We hypothesize that some of these domain registration aliases relate to real people

[Why] We hypothesize that adversaries have mistakenly tied their operational alias to their real personas revealing their personal details

[What] Our hunting goal: uncover the real people behind malicious domains providing attribution

[Where] Domain registration data and other open sources

[How] Take 500 known malicious domains, extract alias details from their registration, and pivot across open sources looking for correlation with real people (a sketch of the alias-extraction step follows this strategy)

[When] Spend 3 days collating known malicious domains, 2 weeks pivoting across open sources
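As a minimal sketch of the alias-extraction and pivot-preparation step in the [How] above, assuming the python-whois package and remembering that registration data is frequently redacted or privacy-protected; the domain list here is hypothetical:

```python
# Minimal sketch: extract registration aliases from a (hypothetical) list of
# malicious domains using python-whois (pip install python-whois).
# Treat results as leads for OSINT pivoting, not facts.
import whois

malicious_domains = ["badstuff-victim.com", "baddomains.com"]  # hypothetical list

aliases = {}
for domain in malicious_domains:
    try:
        record = whois.whois(domain)
    except Exception as err:                       # lookups fail frequently
        print(f"{domain}: lookup failed ({err})")
        continue
    emails = record.emails or []
    if isinstance(emails, str):
        emails = [emails]
    for email in emails:
        aliases.setdefault(email.lower(), set()).add(domain)

# Aliases seen across multiple domains are the most promising OSINT pivots.
for alias, domains in aliases.items():
    if len(domains) > 1:
        print(f"pivot candidate: {alias} -> {sorted(domains)}")
```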

Keeping up with the Stream: How I Maintain External Situational Awareness

In any field related to intelligence and security it is critical to stay abreast of external news and developments.  But your time is a zero-sum game, and all security and intelligence analysts must balance their time “reading the news” (consuming news from others) with “creating the news” (generating new intelligence and insight for others) – this is how I view my work time strategically.  Building tools and techniques to more efficiently “read the news” allows you to spend more time “creating the news.”  So it is no surprise that I am asked regularly what I do to stay connected with the world and the community.  Here is my answer, for my particular situation and need.  Mileage will vary.

For me, the key is to take advantage of curated news/information streams instead of curating it myself.  However, just like relying on any one news source, relying on one or a few curators for your news will quickly introduce you to the bias of the curators themselves.  Therefore, I don’t rely entirely on this method and also self-curate to a small extent to lower that risk.

I organize my professional reading into three categories: world, profession (computer science/security/analysis/data science), and discipline (threat intelligence).  Usually, I begin by reading the world news, followed by threat intelligence, and lastly information I need about my profession.  I feel that this appropriately prioritizes my time and gives me the best perspective to solve problems throughout the day.

Here is my particular strategy:

  1. I begin with the top stories on Google News and then The Economist.  I then browse the front page of Reddit.  Together this gives me a healthy sense of major events in the larger world.  This is critical because my discipline is heavily influenced by larger world events.  Within this set I also focus my time on articles which have a direct impact on the areas of the world my daily work touches.
  2. I read curated security and intelligence emails: Team Cymru Dragon News Bytes; SANS NewsBites (weekly); and two others which come from paid services via my employer.
  3. Twitter.  I use key hashtags and user lists to pare down the stream to a consumable chunk.  This is very much an art form and I’ve yet to feel a mastery.
  4. RSS Feeds.  I use Feedly to curate my RSS feeds (a minimal do-it-yourself sketch appears after this list).  However, over time I’ve found that my other strategies tend to surface most of the gems from the feeds.
  5. If I have time, I’ll then use a financial news site to browse the news about my company as well as major players in cyber security to maintain awareness about the larger business pressures and events which may impact my work.
  6. Return to Twitter.  About 2-3 times/day I’ll return to Twitter to scroll through tweets by key hashtags and user lists to make sure I find anything critical right away.
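For the self-curated RSS portion (item 4 above), a minimal do-it-yourself sketch using the feedparser package might look like this; the feed URLs are placeholders, not recommendations:

```python
# Minimal sketch of self-curated RSS reading with feedparser
# (pip install feedparser). Feed URLs are placeholders.
import feedparser

feeds = [
    "https://example.com/security/feed.xml",
    "https://example.org/threat-intel/rss",
]

for url in feeds:
    parsed = feedparser.parse(url)
    print(f"== {parsed.feed.get('title', url)} ==")
    for entry in parsed.entries[:5]:              # just the newest handful
        print(f"- {entry.get('title', '(untitled)')}: {entry.get('link', '')}")
```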

The Long & Important Ones

About once per day I find a white paper or article on which I want to focus and absorb.  For those, I print them out (yes, on paper) and read them later with a pen in my hand so that I practice active reading: making marks, underlining, and writing comments which help me absorb the material and create an internal conversation.  I find this a highly enjoyable activity which stimulates creativity and engagement, helping to foster new ideas.

How do you maintain your external situational awareness?  Please comment below or tweet @cnoanalysis

5 Cyber Security Predictions for 2013

2012 has been an interesting year, with growth in our understanding of our adversaries and some high-profile international security incidents.  2013 will continue to impress, but differently.  It will ultimately be a year of strategic growth.

 

Here are 5 cyber security predictions for 2013.

 

 

1. There will be little change to the threat landscape

There will be little change to the threat landscape in 2013, as our adversaries are already achieving their intent (extrapolating from the size and scale of currently known adversary operations) and therefore have little pressure to change.  However, I do not see this holding into 2014 as greater innovation is made in the threat intelligence and mitigation space (prediction #5) and the role of government is better defined (#4).

 

2. Cyber attacks will have a greater impact on a greater number

As data and service providers co-locate in cloud environments, attacks on the infrastructure providing these services will rise (attackers will always go to where the data lives), resulting in greater collateral damage to unintended victims simply because of whom they are co-located with.

 

2.1 Corollary: Risks will be more difficult to assess as control over the location of data and accurate knowledge of the infrastructure are lost in the cloud.  This will cause businesses to continue to mismanage public and customer relations when incidents occur.

 

3. The cost of cyber threats will grow and there will be an increased awareness and visibility of those costs resulting in greater effective action in the mid-and-long term.

Based on prediction #2, the cost of cyber threats to all organizations will grow.  However, as has been the trend, visibility of security issues and incidents will rise, forcing business change to address this challenge in new ways (hence prediction #5).  Innovation will then lead to greater effective action in the mid- and long term.

 

4. The role of government in securing computer systems from domestic and foreign cyber threats will continue to be muddled.

The role of government in any area is generally slow to evolve.  Cyber security has not been any different.  As governments around the world are consumed by domestic and international economic affairs, little attention will be focused on this problem, further delaying necessary action.

 

5. Private industry, vice government or research, will make great innovations in the threat intelligence and mitigation space.

Based on: (1) the amount of venture capital flowing into the cyber security industry to produce innovations in threat intelligence and mitigation, (2) the market growth for such innovations (based on predictions #2 & #3), and (3) the ability, afforded by that growth in funding, for private industry to recruit and retain the best talent in the field — it is no great stretch of the imagination to see that this is where the innovations necessary to combat the threat and increase risk and cost on the adversary will originate during 2013, changing the threat landscape in 2014 and beyond.

A Hacker's Hierarchy of Operational Needs

A Hacker’s Hierarchy of Operational Needs

Maslow’s Hierarchy of Needs

All humans have a basic set of needs which they work to satisfy, as described by Maslow in his seminal paper, “A Theory of Human Motivation.”  Maslow did not create a true hierarchy; he described how there are sometimes competing and/or complementary needs.  Instead of a strict hierarchy, these needs form dominating preferences or priorities.

This made me question whether there is a cyber operational equivalent: a set of hierarchical needs or requirements necessary for the adversary/hacker to meet their goal.  Like Maslow, I do not believe this hierarchy is necessarily serial in nature; rather, the needs inform priorities and dominating preferences, and they need not be satisfied in order.

For instance, a hacker may create a capability and then sell that capability, or their skills in using it, to an organization, thereby gaining funding for the rest of the operation.  While the capability was the first thing achieved in the chain, it was a vehicle to satisfy a more basic need: funding.

 


  • Basic Necessities: those things which allow a person to live and work effectively
  • Funding: even the most basic funding is required for equipment (computers) and for purchasing things like connectivity to the Internet
  • Connectivity: a hacker must be connected to a network through which they can reach potential targets
  • Target Vulnerabilities: a hacker must have a set of vulnerabilities and exposures which they can exploit to achieve their goals
  • Capabilities/Infrastructure: I believe these are equally important, and both are requirements for operations – the capability to achieve the intended effect, and the infrastructure to deliver the capability to the target victims
  • Targets: a hacker must have one or more targets which they can use to achieve their intent
  • Access: a hacker must have access to the target to achieve any effects and ultimately a positive outcome
  • Outcome: the successful exploitation, attack, or other effect which was the entire intent of the hacker
  • Reward: the reward for a successful operation (fame, fortune, notoriety, etc.)


So, what do you think?  Does this map to your understanding of the operational needs of a hacker?  How would you use this model?

 

United We Stand, Divided We Are Falling: Are Security Collectives a More Effective Model?

Anti-virus is a multi-billion dollar industry and a necessary best-practice and due diligence measure everyone must implement.  Yet, by its own admission, it is failing spectacularly – delivering only least-common-denominator prevention without any signs of improvement.  How can consumers of security products guarantee better service and quality from the vendors?  By forming security collectives and achieving economies of scale in security.

NATO: An Example of Collective Security

Mahatma Gandhi was a strong man, but it took the collective work of thousands making salt during the Salt March to make a statement.  One person in need of high quality private insurance in the U.S. does not make a market, but millions coming together into health groups can leverage their collective purchasing power to deliver the necessary health plan at the right price.  Political parties band like-minded people together to set policy and government agendas.  Governments come together for collective benefit (e.g. NATO).

Collectives can make markets, demand services, set prices, deliver effective products, and change industries.  They can leverage their group funding and influence to get what they need where individually they cannot.

One of the most difficult aspects of achieving information assurance and network security is that we rely, almost exclusively, on third-party vendors and service providers to achieve our goals – to deliver the technology, to block the right packets, to prevent the wrong applications from installation, to write secure code for their applications, etc.  These come in the form of application vendors, anti-virus vendors, intrusion detection/prevention platforms, firewall systems, network infrastructure, etc.

Yet these vendors are simply delivering the least-common-denominator in security.  They are building technology and leveraging intelligence only enough to solve the problems for the greatest number – not solving the right problems for their customers.

Mikko Hypponen recently stated openly that anti-virus vendors fail at this job.

“The truth is, consumer-grade antivirus products can’t protect against targeted malware created by well-resourced nation-states with bulging budgets. They can protect you against run-of-the-mill malware: banking trojans, keystroke loggers and e-mail worms.” — Mikko Hypponen in Why Antivirus Companies Like Mine Failed to Catch Flame and Stuxnet, appearing in Wired, June 2012

It is rare that a company, or an entire industry, will admit its faults publicly.  What Mikko says is true: anti-virus products are designed to protect customers from the most common threats but fail to protect them against the most advanced ones.

How then can customers achieve the assurance necessary to operate when the products they rely on most to protect them cannot?

Today: we define our security objectives alone, individually contract with security vendors for products (e.g. anti-virus, intrusion detection systems), and then conduct incident response and intrusion analysis in a vacuum.

What if organizations were to band together with like-mission partners into security collectives (military with military, critical infrastructure with critical infrastructure, healthcare with healthcare, etc.), thereby achieving economies of scale in shared threat intelligence and greater vendor support through collective bargaining?

Is the anti-virus product not delivering the protection you (and your partners in the group) need?  Then take all of your money elsewhere to a vendor that will offer the level of service necessary.  Maybe you need a hunting service for the most advanced adversary.  Maybe you need advanced threat intelligence utilization.  Maybe you need just enough support to keep the next worm from infecting your network.

With security collectives, all customers need not be satisfied with the least-common-denominator of security.

Money talks, and big money talks even louder.  Threat intelligence is nice, shared threat intelligence is stronger.  We need to band together into collectives to make our voices heard and demand better service based on our respective missions.

Protect All Information Completely? Expect Exploitation Instead

Packet Pushers recently published an opinion titled “Pill-Chomping Hackers and Security Whack-a-Mole.”  It makes several very good points:

  • All information about a target is a potential vulnerability
  • Information is hopelessly entangled, and one piece of innocuous information can lead to other pieces of critical information
  • Information is only as secure as those protecting it

There is one point worth reiterating: when you share your data (whether it is your social security number at a medical office or your credit card number at a restaurant), it is only as secure as the security of those holding it.  In essence, both organizational and personal security must expand their boundaries to include anywhere their information is held.

However, there is one point I would like to argue against: the implication that all data must be secured because it is a vulnerability.  It is not possible to protect all data equally.  A data owner must place different values on different data and protect each appropriately.

Second, hiding all of your most critical data using the most secure methods still does not guarantee security.  Instead of attempting to build the best security controls and assuming they work, it is better to protect your data as well as possible and then assume you will be exploited.

Don’t just protect the data: watch for signs of exploitation and prevent further exploitation.  In the case of a social security number in the real world, if one assumes the theft and misuse of the number, then it is best to watch for further misuse (e.g. unauthorized new lines of credit being opened, unexpected activity on credit cards, etc.).

Furthermore, reduce loss.  If at all possible, make sure that any compromise is as insignificant as possible.  In the real world, it is best to avoid password re-use so that if a password to one application or website is compromised, not all of your passwords are compromised.

Yes, protect your data as best as you can, but assume adversaries are out to exploit you – and that they will be successful.

A New Security Accounting or How to Win Against a Formidable Adversary

Many intrusion analysts are constantly plagued by a nagging thought that we are fighting a losing battle.  The problem only gets worse; it never seems to get better.  There are only more hackers, more damage, more vulnerabilities, more exploits, more toolkits, etc.  Every day we feel overwhelmed and under-resourced.

This feeling is not wrong.  Our instinct is correct.  We are fighting a losing battle.  There are many more adversaries than there are network defenders.  The adversary needs only one vulnerability, one exposure, or one exploit to win – while we need to find and patch all the vulnerabilities and exposures and prevent all exploits to just stay even.  We have already lost before even playing the game.

To win this battle, or bring it to a draw, we must initiate a new security accounting.  We must change our thinking.

First, we must accept loss.  We must understand that we will be penetrated and exploited.  We must focus on early detection, discovery, and the minimization of loss/mitigation.  We must not count every intrusion as a failure.  This is a game to be played over decades, not days.

Second, we must be truthful with ourselves and then truthful with others.  No more counting scans detected by the firewall as “millions of blocked intrusions.”

Third, we must stop accounting for security in terms of the money and resources we have spent to secure ourselves.  That is a self-centered and foolish accounting.  We must instead focus on how much we forced the adversary to spend, in money and resources, to exploit our network – their dollars spent per megabyte of data stolen.  The larger we make that ratio, the more secure we become: (1) we will reduce the number of adversaries operating against us, because only the best resourced will be able to gain any profit from their operations, and (2) we will reduce the effectiveness of the adversaries which do operate against us by increasing their costs and decreasing their gains.
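A tiny worked example of that ratio, with entirely invented numbers, just to show the accounting shift:

```python
# Tiny worked example of the adversary cost ratio described above.
# All numbers are invented for illustration.
adversary_cost_usd = 250_000      # what the operation cost the adversary
data_stolen_mb = 500              # what they got out of it

cost_per_mb = adversary_cost_usd / data_stolen_mb
print(f"Adversary paid ${cost_per_mb:,.0f} per MB stolen")  # $500 per MB

# Defensive success = driving this ratio up over time, not blocking every scan.
```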

Some may say that this is a losing proposition.  What about the adversary willing to spend $10 million to exploit my network and steal my intellectual property when I can only spend $1 million to protect it?  You’re screwed.  The adversary obviously values your data more than you do.  The only hope is to band together with other targets/victims to combine your forces in the hope of creating parity with the adversary.

An analogy: if one country is willing to spend billions to create a military to defeat another country, and the target country cannot even spend millions in defense, it will likely lose.  Its only hope is to create an alliance with other countries in the hope of (1) creating an effective combined force to battle the adversary or (2) being able to pull other levers (e.g. trade, economics, etc.), costing the hostile country enough to make the attack worthless.

In the end, it comes down to a relationship built on value.  As long as the adversary is making a profit (however that is defined) there is no incentive for them to stop.

There are two types of victims: victims of opportunity and victims of interest.

Victims of opportunity are victims because they were available to the adversary at the right time but possess little value.  If the adversary were to lose access, they would likely not notice.  These organizations can utilize standard security practices to protect themselves, reducing their likelihood of becoming a victim.  Example: a home computer infected with a botnet.

Victims of interest are victims because they possess great value to the adversary.  If the adversary were to lose access to the victim it would be noticed, and the adversary would spend resources regaining access and maintaining that access.  The adversary will not stop victimizing the organization until the relationship between adversary and victim changes and the victim no longer provides enough benefit to justify the cost of exploitation.  Example: Advanced Persistent Threats.

Therefore, a security strategy must be based on the adversary/victim relationship.  The only way to win against a formidable adversary, one in a considerably better position than yourself, is to make it too costly for them to wage war.  (NOTE: the cost will be different for each adversary, some may be sensitive to finance while others might be sensitive to jail/loss of freedom, etc.)
