Always A Bad Day For Adversaries

Category: Best Practices

The Cost of Bad Threat Intelligence

There is no doubt that threat intelligence is now “a thing.” At RSA 2015 I couldn’t help but notice how many vendor booths were hawking their relevance to threat intelligence.  I hear about a threat intelligence start-up almost weekly.  That is not surprising given venture capital is flowing and C-suite customers are now investing in “threat intelligence.”  Everyone wants a piece of the pie.

While market growth for threat intelligence produces innovations, it also produces negative by-products (welcome to capitalism).  The most concerning by-product is the reduction in threat intelligence quality.

A growing number of published threat intelligence reports contain inaccuracies and poor analysis.  A growing number of indicators across a variety of producers are either stale, irrelevant, or generate so many false positives as to be useless.

What so many fail to realize is the cost of poor quality intelligence.  Here are some of the costs:

  • If a single threat intelligence-sourced alert generates $1,000 worth of time to investigate a false positive, it is easy to see how that relatively small amount can multiply within an organization and across enterprises worldwide (see the sketch after this list).
  • If an intelligence producer incorrectly categorizes a threat as APT (say, instead of cyber crime), an organization’s security response to the threat will be (and should be) different, likely involving a deeper investigation.  Again, this additional, and likely unnecessarily deep, investigation is costly in both time and resources.
  • Every poor-quality report costs time to read and digest – time that could be spent understanding a high-quality report.
  • Every poor association or correlation derails an analytic effort at an organization.
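
To make the first bullet concrete, here is a minimal back-of-the-envelope sketch in Python.  Every figure in it is hypothetical and exists only to show how a “small” per-alert cost multiplies:

    # Hypothetical figures only – a sketch of how false-positive costs multiply.
    cost_per_false_positive = 1_000       # analyst time to investigate one bad alert (USD)
    false_positives_per_week = 5          # from a single low-quality intelligence feed
    organizations_using_feed = 2_000      # consumers of that same feed

    weekly_cost_one_org = cost_per_false_positive * false_positives_per_week
    industry_cost_per_year = weekly_cost_one_org * organizations_using_feed * 52

    print(f"One organization: ${weekly_cost_one_org:,}/week")         # $5,000/week
    print(f"Across all consumers: ${industry_cost_per_year:,}/year")  # $520,000,000/year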

Because organizational security resources are finite and already stretched thin, these mistakes, errors, and poor practices consume critical resources which could be spent on other problems and reduce the security of an organization.

Two market elements have caused this quality reduction:

  • A need to garner attention in the growing cacophony of the threat intelligence market, feeding a “first to publish” mentality which usually results in a “rush to publish.”
  • A lack of customer education resulting in a poor evaluation of providers, thereby incentivizing the wrong aspects of threat intelligence – such as volume of indicators over their quality or relevance.

Obviously, only threat intelligence providers can solve the problem, but what pressures can help drive effective change?  Here are some:

  • Threat intelligence customers armed with evaluation criteria (particularly quality metrics) which help them leverage threat intelligence effectively without generating unnecessary costs – this will help create market drivers for higher quality
  • Industry must self-police bad intelligence by being honest with ourselves and each other.
  • Threat intelligence aggregation platforms should have quality assessment capabilities informing the intelligence consumer of potential problems (likewise, they are also in a position to highlight timely, relevant, and unique intelligence of great value)
  • Threat intelligence analysts trained in analytic tradecraft stressing quality and accepting an ethical duty

Security professionals practicing threat intelligence must understand the implications of mistakes and poor analysis.  Bad intelligence can and does decrease the security effectiveness of an organization.  Therefore, it is an ethical duty of the threat intelligence practitioner to reduce errors.  Threat intelligence is difficult – intelligence by definition attempts to illuminate the unknown and works by making judgments with imperfect data – errors are natural to the domain.  But with proper practices and procedures, bad intelligence can, and must, be minimized.

15 Things Wrong with Today’s Threat Intelligence Reporting


As I have written before, intrusion analysis is equal parts knowing the technical elements of an intrusion and being an analyst.  However, most in this domain spend an inordinate amount of time studying technical details compared to honing any analytic skills.

How long has it been since you’ve taken a highly technical course?  (probably within the last year or two)  How about an analysis course?  (probably in the last 5 years, 10 years, never?)

I read several threat intelligence reports daily.  It is painfully obvious how the lack of analytic skill is harming the discipline.  Many folks come from technical degree backgrounds and analyze packets and binaries well enough, but can’t seem to tell the difference between inductive, deductive, and abductive reasoning.  Furthermore, their managers and mentors never recognize the problem; they just send them to more technical courses.

What is the risk?  Threat intelligence provides insight and context to improve decision making.  The risk of bad intelligence is high.  Bad decisions can easily be made from poor intelligence – potentially doing more harm than good.  Good analytic practices improve analysis, thereby decreasing the risk of poor intelligence.  You could have the best packet analysis skills in the world, but if you cannot communicate your conclusions effectively to those who need to act on your information, those skills are effectively useless in threat intelligence.

We need to do better.  I started this post about a month ago and wrote down a “lesson” whenever I saw an example of poor analysis.  Needless to say, I saw some of these several times.  (Contrary to the recommendation of others, I will not cite/quote specific examples – I believe that would only name and shame others.)

Trend – the word actually means something

How many times per week must I read about a new “trend” from threat intelligence?  One or two events does not constitute a trend.  Even three or more events, depending on the universe of events, may not constitute a trend.  Trends are serious.  True trends in adversary activity and methodologies inferred by threat intelligence should drive data collection, analytic tradecraft, and defensive decisions.  Before you start throwing out the word trend just because you’ve seen something a few times, consider the millions of other events you’re not seeing and ask whether your observations are just an anomaly rather than a trend.
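
For those who want something more rigorous than “I saw it three times,” here is a minimal sketch of a baseline sanity check, assuming weekly event counts are available.  The counts and threshold are illustrative, not a real statistical model:

    # Compare a recent count against a historical baseline before crying "trend."
    from statistics import mean, stdev

    def looks_like_trend(history, recent, z_threshold=3.0):
        """history: past weekly event counts; recent: this week's count."""
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return recent > mu
        return (recent - mu) / sigma >= z_threshold  # flag only large departures

    weekly_counts = [2, 0, 1, 3, 1, 2, 0, 1]     # hypothetical observations
    print(looks_like_trend(weekly_counts, 3))    # False – within normal variation
    print(looks_like_trend(weekly_counts, 12))   # True – far outside the baseline

Even a crude check like this forces the question the paragraph raises: against what universe of events is this a departure?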

Analysts and conclusions are like horses: sometimes you need to lead them to water

In many cases I can follow the logical progression of hypotheses and facts to the conclusion.  In some cases I cannot – either because an analyst failed to include the appropriate evidence/fact on which an assumption must now rest, or because of convoluted logical reasoning.  Ensure evidence supports your conclusions and the logical reasoning is clear.  Don’t assume that what is clear in your mind will be clear in mine.

You can’t be completely confident all of the time – use words of estimative probability

Do you know how often I see the effective use of estimative probability in recent threat intelligence reporting?  Almost never.  This is a problem.  Not everything presented is irrefutable fact; in fact, a good analysis will have a proper mix of data/facts, hypotheses, and conclusions.  The confidence values of these conclusions vary.  When you don’t effectively apply estimative probability and variable measures of confidence, it removes value from the analysis and increases the risk of poor decision making by consumers.  First, if you don’t know what estimative probability is, LEARN about it.  Then learn how and when to apply it properly.  Importantly, also know what words/phrases to avoid (i.e. weasel words).
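
As a starting point, here is a minimal sketch mapping numeric confidence to words of estimative probability, loosely following Sherman Kent’s classic scale.  The exact bands vary by organization, so treat these values as illustrative:

    # Loosely Kent-style bands; organizations tune these ranges to taste.
    WEP_SCALE = [
        (0.93, "almost certain"),
        (0.75, "probable"),
        (0.50, "chances about even"),
        (0.30, "probably not"),
        (0.07, "almost certainly not"),
    ]

    def estimative_phrase(p):
        """Translate a probability estimate into a consistent reporting phrase."""
        for floor, phrase in WEP_SCALE:
            if p >= floor:
                return phrase
        return "remote"

    print(estimative_phrase(0.80))  # "probable"
    print(estimative_phrase(0.05))  # "remote"

The point is not the specific numbers but the consistency: the same phrase should mean the same thing in every report you publish.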

Never be afraid to include contrary evidence

Do you know how many times I saw evidence contrary to the conclusion presented in a threat intelligence report this month?  Never.  Practice analytic honesty.  If there is exculpatory evidence, contrary evidence, or an alternative hypothesis – show it.  As long as you’re following some of the other lessons here (e.g., separating fact and hypothesis, using words of estimative probability) it will strengthen your analysis and provide more value to the consumer.

Just because you’ve seen something for the first time doesn’t mean it’s the first time it happened

We all love finding something awesome and telling the world.  It’s cool because we all want to know what you’ve found!  But, please don’t assume it is the first time it has happened or even the first time it has been seen.  Having confidence is critical, but hubris is deadly to analysis.

Don’t operate on an island

You are not alone!  Don’t act like it.  Share and consume, enrich and enhance.  Go ahead and build on the analysis of others (citing appropriately).  Whatever your observation point or data sources, they’re not omnipresent.  I rarely see analysis reference other (obviously) related pieces.  Why is that?  The power of defenders lies in our community and our ability to work together against an adversary.

Be bold, but don’t be stupid

I like my analysis like I like my coffee: bold.  But, there is a line between taking facts to their logical conclusion and taking facts to crazy-land.  The difference is logic.  Ensure your conclusions and hypotheses follow logically from the facts through induction, deduction, or abduction.  If your conclusions cannot be logically traced or tested, then they’re likely living in crazy-land.

Don’t mix hypotheses, conclusions, and facts

Hypotheses, conclusions, and facts are not the same.  Your intelligence reports should not treat them as such.  Ensure that your readers can effectively separate the three through your use of language, formatting, etc.  When the three are confused it can lead to erroneous assumptions by consumers and to decisions made on weak conclusions rather than facts.

Save hyperbole for the glossy promotion material

Hyperbole has its place.  It doesn’t have a place in threat intelligence.  Save that for the glossies.  Be precise, honest, and accurate.  Don’t embellish or exaggerate.  Trust me when I say we have enough people running around like chickens with their heads cut off in this field.

Logical fallacies are just that, get to know them

Enough said.  I’m sorry I have to say this, but please understand the differences and applicability of deductive, inductive, and abductive reasoning BEFORE writing your first threat intelligence report.  Or, at the very least, have an editor who knows the difference.

Don’t create new words when existing words suffice

I’m not going to name-call here.  You know who you are.  There are times when words/phrases have multiple meanings.  I understand that.  But, aside from that….stop it.

Tell a story!

Your analysis is a story.  You’re effectively documenting history – studying the past – in the hopes of making conclusions and judgments which will help the present and future.  While you are documenting the activity of computers you are ultimately describing the actions caused by adversaries.  Just like any story your report should have a beginning, middle, and an end.

Answer my questions

Write as if you are in a conversation.  Think about what somebody else might ask about what you’re saying, and address those questions in the text.  Any question left unanswered can turn into an assumption on the part of the consumer/customer.  If you don’t have an answer, feel free to write: no further information.

Be concise, be accurate

Practice analytic honesty and respect the time of your reader.  The report you’re considering may actually need to be three different reports – one describing all of the malware reverse engineering, one describing all of the network activity, and another describing the threat itself which references the other reports.  Putting everything in one report does not make it more consumable; it makes it less consumable and allows analysts to muddle up various lines of analysis.

Describe diagrams, charts, and tables both in the narrative text and in a caption

This is just a pet peeve of mine, but one which I find increases the readability of threat intelligence reports.  Make sure you describe your diagrams, charts, and tables in both the narrative text (as part of the story) and in a caption.  I find this necessary because as I move backwards and forwards through a report reading and re-reading, forming and re-forming logical chains, I don’t want to hunt for the description in the text every time.  I also don’t want to jump to the caption in the middle of the text if not necessary, which breaks my concentration.

5 Intrusion Analysis Ideas in 10 Minutes

Here are 5 cyber security ideas to improve your analysis and understanding which will take no more than 10 minutes of your time.

1. Inspect all events with a sliding scale – Good, Suspicious, Bad

An analytic mind-set should move as evidence is uncovered

One of the easiest, and worst, mistakes an analyst can make is to be too firm in their judgement.  I train analysts, and myself, to use a freely sliding scale when inspecting events, packets, and binaries.  This scale moves between known good, suspicious, and known bad as uncovered evidence supports a “goodness” or “badness” final judgement.

It is natural to come to premature conclusions when analyzing data.  Many preach against this, but I have never known a perfectly objective human, and such preaching discounts our naturally occurring and helpful ability to make quick judgments which drive our desire for more data and evidence.  Instead, we should preach against the analyst who is hasty in a final judgement and unwilling to accept and synthesize new evidence in either direction.
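
Here is a minimal sketch of that sliding scale as code, assuming a simple additive evidence score.  The weights and threshold are illustrative, not a real scoring model:

    # A judgement starts at "suspicious" and slides as evidence accumulates.
    GOOD, SUSPICIOUS, BAD = "good", "suspicious", "bad"

    class EventJudgement:
        def __init__(self):
            self.score = 0  # negative leans good, positive leans bad

        def add_evidence(self, weight, note):
            """weight < 0 supports goodness; weight > 0 supports badness."""
            self.score += weight
            print(f"{note}: score {self.score} -> {self.verdict()}")

        def verdict(self, threshold=3):
            if self.score <= -threshold:
                return GOOD
            if self.score >= threshold:
                return BAD
            return SUSPICIOUS  # the middle ground is a legitimate answer

    j = EventJudgement()
    j.add_evidence(+2, "beacon-like periodic traffic")
    j.add_evidence(-1, "destination resolves to a known CDN")
    j.add_evidence(+2, "binary is packed and unsigned")  # slides to "bad"

Note that the verdict happily stays at “suspicious” until the evidence pushes it past a threshold – which is exactly the point of idea #2 below.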

2. Be willing to accept suspicious

There will be many times when after hours or days of work and collaboration the best judgement is that the event, packet, log entry, or binary, etc. is still not known as either “good” or “bad.”  An analyst must be willing to accept this unknown middle ground of “suspicious” where final judgement is elusive.  There will be times when there is not enough evidence either way nor is it likely more evidence will be uncovered (e.g. that purged critical log file, the company will not provide a missing piece of information, etc.).  Be willing to accept suspicious as an answer and reject the pressure to render a final judgement of good or bad.

However, it is important that an analyst is willing to render an informed judgement to decision makers as to where, on the scale, the event lies and what evidence supports that judgement – and more importantly, what evidence supports a contrary judgement.

3. Goodness Outweighs Badness

Some of the best cyber security analysts I have known have been network engineers and system administrators – those that best understand how systems and users actually work rather than relying on the hypothetical or documentation.  This is because the vast majority of network activity is good/valid rather than bad.

The most valuable skill an intrusion analyst can have is to quickly and accurately identify valid activity and separate the non-valid/malicious/bad activity from the pile.  My number one recommendation to upcoming intrusion analysts is to not focus solely on courses and materials which teach intrusion techniques (e.g. SANS) but to spend an equal amount of time on the RFCs and other training opportunities which teach the valid operation and administration of modern networks and hosts.

4. Counter-Intelligence is our closest domain partner

Of all the domains I have studied to further my exploration of intrusion analysis and cyber security, it is counter-intelligence which I have found to offer the most insight and parallels to this domain.  Others may argue with this, but counter-intelligence works in a domain where there is an assumed compromised environment and the focus is primarily on detection and damage limitation when compromise occurs.

Of course, counter-intelligence necessarily breeds paranoia – but that is also a good quality in an intrusion analyst, when balanced with the right amount of sanity 🙂

5. Document everything and don’t get lost in the “rabbit hole”

In the pursuit of an activity, with the gathering of evidence and shifting judgments and hypotheses, things can move quickly.  When investigating intrusion activity, document everything – even if it seems irrelevant – you never know when a case will hinge on a small, originally overlooked detail.  In this documentation also record all open questions and hypotheses so that when “going down the rabbit hole” of analysis towards one hypothesis, other lines of analysis are not forgotten or discounted without effective evidence gathering.

United We Stand, Divided We Are Falling: Are Security Collectives a More Effective Model?

Anti-virus is a multi-billion dollar industry and is a necessary best-practice and due diligence measure everyone must implement.  Yet it is failing, by its own admission, spectacularly – delivering only least-common-denominator prevention without any signs of improvement.  How can consumers of products, namely security products, guarantee better service and quality from the vendors?  By forming security collectives and achieving economies of scale in security.

NATO: An Example of Collective Security

Mahatma Gandhi was a strong man, but it took the collective work of thousands making salt during the Salt March to make a statement.  One person in need of high quality private insurance in the U.S. does not make a market, but millions coming together into health groups can leverage their collective purchasing power to deliver the necessary health plan at the right price.  Political parties band like-minded people together to set policy and government agendas.  Governments come together for collective benefit (e.g. NATO).

Collectives can make markets, demand services, set prices, deliver effective products, and change industries.  They can leverage their group funding and influence to get what they need where individually they cannot.

One of the most difficult aspects of achieving information assurance and network security is that we rely, almost exclusively, on third-party vendors and service providers to achieve our goals – to deliver the technology, to block the right packets, to prevent the wrong applications from installation, to write secure code for their applications, etc.  These come in the form of application vendors, anti-virus vendors, intrusion detection/prevention platforms, firewall systems, network infrastructure, etc.

Yet these vendors are simply delivering the least-common-denominator in security.  They are building technology and leveraging intelligence only enough to solve the problems for the greatest number – not solving the right problems for their customers.

Mikko Hypponen recently stated openly that anti-virus companies fail at their job.

“The truth is, consumer-grade antivirus products can’t protect against targeted malware created by well-resourced nation-states with bulging budgets. They can protect you against run-of-the-mill malware: banking trojans, keystroke loggers and e-mail worms.” — Mikko Hypponen in Why Antivirus Companies Like Mine Failed to Catch Flame and Stuxnet, appearing in Wired, June 2012

It is rare that a company, or an entire industry, will admit its faults publicly.  What Mikko says is true.  Anti-virus is designed to protect customers from the most common threats – but fails at protecting them against the most advanced threats.

How then can customers achieve the assurance necessary to operate when the products they rely on most to protect them cannot?

Today: we define our security objectives alone, individually contract with security vendors for products (e.g. anti-virus, intrusion detection systems), and then conduct incident response and intrusion analysis in a vacuum.

What if organizations were to band together with like-mission partners into security collectives (military with military, critical infrastructure with critical infrastructure, healthcare with healthcare, etc.), thereby achieving economies of scale in shared threat intelligence and greater vendor support through collective bargaining?

Is the anti-virus product not delivering the protection you (and your partners in the group) need?  Then take all of your money elsewhere to a vendor that will offer the level of service necessary.  Maybe you need a hunting service for the most advanced adversary.  Maybe you need advanced threat intelligence utilization.  Maybe you need just enough support to keep the next worm from infecting your network.

With security collectives, all customers need not be satisfied with the least-common-denominator of security.

Money talks, and big money talks even louder.  Threat intelligence is nice, shared threat intelligence is stronger.  We need to band together into collectives to make our voices heard and demand better service based on our respective missions.

8 Tips for Maintaining Cyber Situational Awareness

Situational awareness is the perception of your environment and the comprehension of the elements within that environment, with particular focus on those critical to decision making.

Cyber defenders, operators, and analysts must maintain “situational awareness.”  This is more than sitting in a room with several large televisions streaming Twitter and [insert management’s favorite cable news channel here].

Maintaining situational awareness is the act of continuously defining your environment and identifying (and comprehending) elements critical to decision-making.  The purpose of this act is so that one can continuously orient towards the best decision.

Those familiar with the OODA Loop will recognize this as the observe phase in the loop.

It is important to know and comprehend your environment, which means both your internal situation AND the external situation.

Knowing your internal situation usually comes with dashboards, alerts, network activity graphs, parsed log files, vulnerability scanners, updates from vendors, etc.  From this view an analyst finds particularly interesting events or anomalies and understands their organization’s exposure surface.

Most importantly, the situational awareness from these data points should provide a decision-making construct to identify necessary actions (e.g. “should we patch for that?”, “should we close that firewall hole?”, “should I explore that spike in traffic?”).

However, maintaining knowledge of the internal situation is not enough.  Just as a pilot must keep their eyes on their instruments AND the horizon, an analyst must keep their eyes on their internal sensors AND the external threat environment.

Keeping track of just ONE of these environments is hard enough; how can an analyst hope to track both environments effectively, make effective decisions on that information, and act on those decisions in time?

Both management and analysts dream of some tool that will quickly and easily integrate these disparate and complicated environments so the best decisions can be made quickly.  However, until that dream tool is created:

1. Know your organization’s mission statement, business strategy, and business rules

You’ll never know what elements or events are important if you don’t know what is important to your organization.  Be able to articulate your organization’s mission statement.  How is your organization attempting to meet its goals and how do you support that?  How do the various business units work together to create a cohesive whole?  With this information you can make an informed decision as to the criticality of an event based on the assets being affected.

2. Be cognizant of external events affecting your organization’s mission

What is happening in your market space or global sociopolitical space which is changing your security profile?  Will that new acquisition by a foreign competitor cause you to become a target of corporate espionage?  Will hackers target your organization in retaliation for country X expelling ambassadors from country Y?

3. Be aware of internal events

What is happening inside the organization?  Is there a new desktop load being deployed?  Who is being fired today?  What are the upcoming mergers/acquisitions?  All of these affect the exposure surface of an organization and its target profile to attackers.

4. Find and follow the best

The internet is the greatest collection of human knowledge ever assembled.  Use it.  There are great security researchers and analysts constantly updating information sources with critical knowledge.  Find these sources and follow them.  Use Twitter, Google Reader, Listorious, and other sources to help aggregate this information.  Who/What are the critical sources following?

5. Be aware and able to communicate what is missing

Know what is missing from your viewpoint.  Are there any data feeds which would add to the picture?  What are the biases and limitations of your data sets?  How do these affect your decision-making?  Knowing this in advance and taking it into account will help reduce poor decision-making and unexpected consequences.

6. Know the rule sets, analytics, and data sources

The better an analyst knows their own rule-sets, analytics, and data sources, the more efficiently and accurately they can distinguish critical from non-critical events.

7. Eliminate Useless Information

One must carefully balance the need for information with the danger of information overload, which will cause poor or delayed decision-making.  Therefore, eliminate any useless information sources.  This includes signatures which generate high false-positive rates and network activity graphs which nobody pays any attention to.  It is better to have less information of higher quality than a high quantity which muddles decision-making.  Replace bad data feeds with something useful – or better yet, don’t replace them at all.
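
A minimal sketch of that triage, assuming you track true and false positives per alert source; the data and cutoff are hypothetical:

    # Measure each source's precision and disable the noise generators.
    alert_stats = {  # signature -> (true_positives, false_positives)
        "known C2 beacon signature":      (40, 10),
        "generic 'suspicious scan' rule": (2, 498),
        "legacy worm signature":          (0, 120),
    }

    PRECISION_CUTOFF = 0.10  # illustrative threshold

    for sig, (tp, fp) in alert_stats.items():
        precision = tp / (tp + fp)
        action = "keep" if precision >= PRECISION_CUTOFF else "disable"
        print(f"{sig}: precision {precision:.2f} -> {action}")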

8. Not Everyone Requires the Same Information

It is important for organizations to understand that not everyone needs the same information to maintain situational awareness.  People think differently.  Use that to your advantage.  Don’t try to make robots.  People perceive their environment differently from one another.  Allow each to develop their own information feeds and visualizations to maximize effectiveness.

Don’t Hold Developers Blameless for Security Woes – Security Must Begin In Development

“WordPress is a very popular platform for attackers to target,” he said. “There’s not a whole lot WordPress can do if people neglect to update their WordPress or plug-in software.”  Roel Schouwenberg, senior researcher for Kaspersky Lab

SC Magazine recently published a summary of the Kaspersky report identifying WordPress as the attack vector for intruders to establish infrastructure to spread the Flashback/Flashfake OSX Trojan.  The article included quotes from Roel Schouwenberg, senior researcher for Kaspersky Lab.

Mr. Schouwenberg (@schouw), as seen in the above quote, held the WordPress developers effectively blameless for their product being used as the primary infection vector for a 600,000+ member botnet – about 2% of all Mac OS installations.

Don’t get me wrong, if you choose to host your own publicly facing service on the Internet (such as this one) you have an ethical duty to ensure that it is not being used for evil to the best of your abilities.  This includes regular updates and patching.

Additionally, Internet users share responsibility in ensuring the security of their own machines to the best of their ability.  This includes running effective Anti-Virus (yes, I’m talking to you, OSX users) and patching software (such as the Java Virtual Machine exploited by this trojan).

However, neither the operators of blogs (such as myself) nor the end-users of computers (again, such as myself) can be expected to reverse engineer every piece of software or service that we execute to search for vulnerabilities and then patch those.  That is the sole responsibility of the software developer.  Therefore, developers hold at least some of the blame.

Additionally, patching is almost always a losing security proposition, because the patch will almost always be in response to a known vulnerability – which is why zero-day exploits are still so effective.

The answer is to write secure software in the beginning.  I’m not saying that all software should be, or will be, free of vulnerabilities – I have written over 1,000,000 lines of code myself and I understand fully what it means to write secure code and the incredible, and maybe impossible, challenge that entails.  But, we must begin to hold software developers more accountable for vulnerabilities rather than placing all the blame on the users.  The developers are the only ones who can effectively preempt the attackers by preventing vulnerabilities in the first place and requiring a third-party plug-in architecture (such as in WordPress) to be secure as well (i.e. using sandboxes).

Microsoft has done a tremendous job in this endeavor.  It hasn’t been easy but over the last 10 years they have made significant progress as shown by the reduced vulnerability and exposure surface of their software as well as third-party software running on their operating system.  It can be, and should be, done.
