ActiveResponse.org

Re-Imagining Cyber Security


The Darker Side of Threat Intelligence: Cyber Stockholm Syndrome

Stockholm Syndrome is a psychological phenomenon described in 1973 in which hostages express empathy and sympathy and have positive feelings toward their captors, sometimes to the point of defending and identifying with the captors. - Wikipedia

Maturing as a threat intelligence analyst involves “living with your threat.”  In my interview process I ask potential analysts about threats they’ve tracked in their career.  Tracking a threat for months or years creates a unique learning environment and I look for that in analysts.  Unsurprisingly, in that environment an analyst becomes intimate with the adversary’s routines, their interests, and even begins to distinguish characteristics of individuals from within a larger group.  An analyst gets truly connected when they can successfully predict a threat’s activity.

However, while this sounds like an analytic panacea and something threat intelligence production cells strive to build, it comes at a cost.  The risk is that analysts go beyond being closely connected and become “married” to a threat.  Living with a threat every day, spending all of your professional time studying them and hundreds of hours discussing them with others, it is impossible not to closely connect with the adversary on the other side of your screen.  Analysts become personally attached to the “bad guys” – a “Cyber Stockholm Syndrome.”  I personally know analysts who have fallen into depression when their threat went away.

Not only is this unhealthy for the analyst, this relationship also affects their communication and infects their analytic capabilities, reducing objectivity.

Symptoms of “Cyber Stockholm Syndrome”

  • An analyst gets particularly protective and defensive regarding perceived encroachment on their territory
  • An analyst unnecessarily hides intelligence and data to prevent others from knowing details, helping to maintain their superiority
  • Overwhelming and obvious confirmation bias – an analyst “seeing their threat in everything”
  • An unwillingness to work on other threats even given clear direction and obvious priorities
  • An analyst continues to work on a threat even after the threat is “gone” against overwhelming evidence and analytic consensus

What may cause this?

One hypothesis: an analyst may associate their self-worth with an adversary.  As the analyst grows in mastery of an adversary, they produce spectacular intelligence and amazing insight providing great value to others; this results in praise from leaders and admiration from peers, creating a feedback loop.  The cycle strengthens the bond the analyst builds with a threat as the threat continues to provide value to the analyst.

What should happen?

When this happens, managers may respond by immediately separating the analyst from the threat.  I don’t believe that is the right answer.  Separation causes resentment and potential psychological problems such as depression.  Instead, managers should look to slowly incorporate other analysts into the equation and ultimately strive to return the analyst to a proper relationship so they don’t lose that valuable expertise.

Most importantly, analysts must recognize this problem in themselves, for their own professional and personal well-being.

Additional Discussion

Chris Sanders (@chrissanders88) made an excellent point that Stockholm Syndrome requires empathy with an aggressor, which is lacking in my description.  I agree that the syndrome’s description includes that requirement, but its exclusion from the DSM means there is no consistent definition.  Further, active academic discussion on the topic includes whether Stockholm Syndrome actually exists or is really one facet of a larger aggressor-bonding trait.  While empathy is not the right aspect of the bond I describe here, there is an attachment bond created, whether through the return on investment (ROI) the analyst receives from the adversary or otherwise.  This is evidenced by both the confirmation bias present and the sense of depression described by analysts.  I agree that the application of the Stockholm Syndrome label may be imprecise.


Keeping up with the Stream: How I Maintain External Situational Awareness

In any field related to intelligence and security it is critical to stay abreast of external news and developments.  But your time is a zero-sum game, and all security and intelligence analysts must balance their time “reading the news” (consuming news from others) with “creating the news” (generating new intelligence and insight for others) – this is how I view my work time strategically.  Building tools and techniques to more efficiently “read the news” allows you to spend more time “creating the news.”  So it is no surprise that I am asked regularly what I do to stay connected with the world and the community.  Here is my answer, for my particular situation and needs.  Mileage will vary.

For me, the key is to take advantage of curated news/information streams instead of curating them myself.  However, just like relying on any one news source, relying on one or a few curators for your news will quickly introduce you to the biases of the curators themselves.  Therefore, I don’t rely entirely on this method and also self-curate to a small extent to lower that risk.

I organize my professional reading into three categories: world, profession (computer science/security/analysis/data science), and discipline (threat intelligence).  Usually, I begin by reading the world news, followed by threat intelligence, and lastly information I need about my profession.  I feel that this appropriately prioritizes my time and gives me the best perspective to solve problems throughout the day.

Here is my particular strategy:

  1. I begin with the top stories on Google News and then move to The Economist.  I then browse the front page of Reddit.  Together this gives me a healthy sense of major events in the larger world.  This is critical because my discipline is heavily influenced by larger world events.  However, within this set I also focus my time on articles which directly affect the areas of the world my daily work touches.
  2. I read curated security and intelligence emails: Team Cymru Dragon News Bytes; SANS NewsBites (weekly); and two others which come from paid services via my employer.
  3. Twitter.  I use key hashtags and user lists to pare down the stream to a consumable chunk.  This is very much an art form and I’ve yet to feel I’ve mastered it.
  4. RSS Feeds.  I use Feedly to curate my RSS feeds.  However, over time I’ve found that my other strategies tend to surface most of the gems from the feeds.  (For the programmatically inclined, see the sketch after this list.)
  5. If I have time, I’ll then use a financial news site to browse the news about my company as well as major players in cyber security to maintain awareness about the larger business pressures and events which may impact my work.
  6. Return to Twitter.  About 2-3 times/day I’ll return to Twitter to scroll through tweets by key hashtags and user lists to make sure I find anything critical right away.
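
For the RSS step, here is a minimal sketch of how one might skim the newest entries across a set of feeds programmatically.  It assumes the third-party Python feedparser library, and the feed URLs are placeholders rather than recommendations:

    # Skim the newest entries from a few RSS feeds.
    # Requires the third-party 'feedparser' library (pip install feedparser).
    # The feed URLs below are illustrative placeholders.
    import feedparser

    FEEDS = [
        "https://example.com/security-blog/feed",
        "https://example.org/threat-intel/rss",
    ]

    def skim(feeds, per_feed=5):
        for url in feeds:
            parsed = feedparser.parse(url)
            print(f"== {parsed.feed.get('title', url)} ==")
            for entry in parsed.entries[:per_feed]:
                print(f"  {entry.get('title', '(untitled)')} -> {entry.get('link', '')}")

    if __name__ == "__main__":
        skim(FEEDS)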

The Long & Important Ones

About once-per-day I find a white paper or article on which I want to focus and absorb.  For those, I print them out (yes, on paper) and read them later with a pen in my hand so that I practice Active Reading: making marks, underlining, and writing comments which help me absorb the material and create an internal conversation.  I find this a highly enjoyable activity which stimulates creativity and engagement, helping to foster new ideas.

How do you maintain your external situational awareness?  Please comment below or tweet @cnoanalysis


13 Principles of Threat Intelligence Communication

I have written at length about bad threat intelligence.  However, I think it is time I spend some effort communicating my key principles for making great threat intelligence.  One aspect of great threat intelligence is great communication.  As I have said before, you may be the greatest analyst in the world, but if you can’t effectively communicate your knowledge then it is of little use.

I’ve found these principles apply to all modes of my communication when discussing threat intelligence with others.  They’ve guided me well and I hope they do the same for you.

Answer the Three Questions

All threat intelligence communication should work towards answering three critical questions.  If you clearly articulate the answers to these questions, your communication will generally be successful.

  1. What is it? (give me the information)
  2. Why should I care? (tell me about the threat and its relevance to me)
  3. What am I going to do? (enable my decision and action)

Maintain Your Focus

Focus is key to your communication – understand your audience and your objective and maintain that throughout.  Here are some elements which help me:

  • Remember the four qualities of good intelligence (CART): completeness, accuracy, relevance, and timeliness.  Fulfill them as best you can.
  • Remember, the purpose of threat intelligence is to inform and enable effective decision-making, whether that be tactical/technical, operational, or strategic.  You don’t need to provide EVERYTHING, only that which will support and enhance the decision.
  • Length matters: your communication should be as long as it needs to be but never longer than it should be.  Here’s a secret: it’s okay to not communicate everything in one vehicle – sometimes separating the material makes the threat intelligence more effective.
  • Don’t derail your audience.  After reading your 30-page report, make sure I know the value of the information and that you’ve addressed the key questions.  For example: don’t suddenly drop an unrelated element into your conclusion just because you want to make a point.

Analytic Integrity is All You Have

Intelligence is about trust.  When people can’t independently verify your findings and conclusions (and most won’t or can’t), they must trust you.  You must create, support, and encourage that trust by practicing analytic integrity in your communications.  If you break that trust you lose your integrity and nobody will listen to you.  Here are some of my rules for creating and encouraging trust with your audience:

  1. Don’t lie – if you don’t know, just say that
  2. Don’t embellish – don’t use hyperbole or language which might cause an over-reaction
  3. Don’t plagiarize – never intentionally copy the work of another (and guard against doing so accidentally)
  4. Practice humility – hubris implies overcompensation for weakness; be bold but not stupid

Be a Storyteller

Threat intelligence is a story – tell it as one.  Threat intelligence should have a beginning, middle, and an end.  Engage your audience.

The Summary IS the Communication

I know it sounds weird, but your summary is the most important part of your communication.  This is what people will remember and what they’ll rely on most afterwards.  For many, it is the only part to which they’ll pay attention.  Your summary (or key points, etc.) should be excellent.  I instruct analysts to spend at least 20% of their time on their summary and conclusion – it is that important.

As the old adage goes: “tell them what you’re going to tell them, tell them, tell them what you told them.”  This is CRITICAL advice and not often heeded by technical analysts.

However, I want to caution you.  Others suggest that following this old adage only bores an audience.  I agree that it is a pitfall for many, but only because they follow the guidance without understanding it.  Avoid having the summary and the conclusion contain the same bullet points or phrasing – that is boring.  Your summary/introduction/key points and your conclusion should carry your key message and information, but in different ways.

Language Matters

The language you use greatly determines the effectiveness of your communication.

  • Use Active Voice – this isn’t some joke or regurgitation of high-school English.  It matters.  Active voice has been proven to decrease ambiguity and increase comprehension.  It improves your intelligence.
    • Science: “Certain syntactic constructions are known to cause the processor to work harder than others. Sentences with passive verbs are more difficult to comprehend than those with active verbs (Gough 1966; Slobin 1966; Olson and Filby 1972; Ferreira 2003) since they not only reverse the standard subject-verb-object order of the participants but are often used without a by-phrase, which omits one participant altogether and can obscure the grammatical relations.”
  • Use Estimative Probability – judgements, hypotheses, and conclusions are never 100% certain; use words of estimative probability to convey your certainty to your audience (see the sketch after this list).
  • Clarity wins over all – don’t use complex language when simple will do.
  • Minimize subjective qualifications – avoid words/phrases like “sophisticated adversary” or “complex encryption” unless you can measure them either objectively or in comparison with others.  These phrases only add ambiguity.
  • Words mean things – don’t dilute your language or create a phrase when one already exists.
  • Analysis is not a religion – don’t use the word “believe”; hold measured judgements expressed in language that differentiates fact from hypothesis.
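
To illustrate estimative language mechanically, here is a small sketch mapping a numeric confidence to a word of estimative probability.  The bands are loosely adapted from Sherman Kent’s classic scale; the exact cut-offs are my illustrative assumptions, not a standard:

    # Map a numeric confidence to estimative language.
    # Bands loosely adapted from Sherman Kent's scale; the exact
    # cut-offs below are illustrative assumptions, not a standard.
    def estimative_term(probability: float) -> str:
        if not 0.0 <= probability <= 1.0:
            raise ValueError("probability must be between 0 and 1")
        if probability >= 0.93:
            return "almost certain"
        if probability >= 0.63:
            return "probable"
        if probability >= 0.40:
            return "chances about even"
        if probability >= 0.13:
            return "probably not"
        return "almost certainly not"

    # Example: "We assess it is probable (p ~ 0.7) that the activity is related."
    print(estimative_term(0.7))  # -> "probable"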

Value Your Audience

Value their intelligence and their time.  They are not fish to be caught with click-bait or hyperbole; respect them for their interest in your work.  Your audience is spending time with you because they think you have something valuable to communicate and they have come to learn something new – GIVE IT TO THEM!  Or they will leave you.

Images are Powerful

Use images strategically to tell your story, reinforce critical concepts, and increase accessibility and understanding.  Images should not become overwhelming, distracting, or superfluous.

Write for Your Future Self

Communicating intelligence and analysis is HARD.  It’s hard because you’re trying to take a very complex cognitive process and share it with others.  I’m not the only one who has read something they wrote a year ago only to scratch my head and wonder what I was smoking.  I’ve found that to make this easier I simply imagine that I’m communicating with my future self – say 1, 2, or 3 years from now.  This helps ensure that I include important details which are obvious now but will be lost later.  Further, it ensures that I make my logic chains clear and easily followed by others.

Don’t be an Island

Be part of the community.  Respect the community.  Expand on the work of others and fill in knowledge gaps.  Confirm others’ findings and add support to their conclusions or hypotheses.  Add exculpatory evidence and provide alternative hypotheses.  And here’s a secret: it’s okay to point to the analysis of others in your communication – you don’t always have to self-reference.  This actually adds value for your audience and makes you more valuable to them because they trust you’re going to tell them the whole story – not just your story.

Respect Your Adversaries

Don’t belittle adversaries in your threat intelligence.  Don’t give them undue credit, but also don’t take away from their effectiveness.  This will only lead to hubris – and hubris is deadly.  We all know of an analyst who called a threat “unsophisticated” or “simple” only to later report a massive compromise.

Be Bold, Be Honest, Be Right, But Always Be Willing to be Wrong

I’ve said it before: I like my analysis like I enjoy my coffee – bold.  I want analysts to be analysts – not reporters.  I want to hear ideas, conjecture, assessments, opinions.  And I want those clearly separated from the facts.

Separate Fact From Everything Else

This is a pretty simple rule, but it is harder to follow in practice while working through a complex analysis.  Strive to use language, format, font, etc. to separate fact from hypothesis.  Because threat intelligence enables decision-making, decision makers (whether a SOC analyst, a CIO, or whoever) should make their own judgement based on your analysis.  If your facts and hypotheses are indistinguishable, it is highly likely they’ll make poor decisions based on misinterpreted analysis.


Cyber Threat Language Dilution

A “trojanized document” hides malware inside itself, but rarely do we call a webpage doing the same a “trojanized webpage.”  The word Trojan, derived from the Trojan Horse of Greek legend, was intended to describe a seemingly innocuous object containing damaging material; it now describes almost all cyber threat delivery vectors.  The term “Trojan” in cybersecurity has become diluted to the point of nonsense.

Trojan is just one example in a diluted language space which now includes terms like virus, rootkit, targeted, etc.  As the community grows in both depth and breadth, it carries with it historical baggage and loose terminology.  Poor phraseology infects writers unfamiliar with the nuances, further contributing to the problem.  Lastly, as cyber threats grow and change, the language must evolve as well, causing further issues.  For example, the increased modularization of capabilities challenges attempts to clearly categorize with existing language.

This is a problem for effective threat intelligence communication.  Good threat intelligence accurately communicates the context of the threat, placing it within a risk environment.  A reliance on diluted language increases ambiguity, thereby decreasing accuracy and effectiveness.

My message to those responsible for communicating cyber threats: consider language dilution – both how your own writing contributes to dilution and how leveraging diluted language affects your customers.  Language dilution is a fact of life for any discipline, but how it’s addressed makes the difference.

 


Names…Names Everywhere! The Problem, and Non-Problem, of Name Pollution

Naming pollution is real.  It’s a real problem.  First it was anti-malware/AV detection names; now it’s APT group names – and their campaigns – and their malware.  Analysts are in love with names – and marketing is in love with their names.

You see, naming is powerful.  It’s why we agonize over a child’s name.  It’s why (in the Judeo-Christian tradition) God’s name was truncated and not to be uttered.  At about 2 years old we start learning the names of things and begin uttering them back.  This gives us power, because when the 2-year-old is able to communicate a thing’s name – we give it to them!  It’s powerful to a 2-year-old and that same power follows us throughout life – see “name dropping” – or the honor of naming a new geographic or astronomical feature.


It’s followed us into the information security space – for both good and bad.  You see, we need names.  Names are important.  It’s part of how we organize cognitive information and make sense of our world – through abstraction.  It’s important to how we communicate.  But, like any power, it can be misused and misappropriated.  Every organization now loves to name “adversaries,” “actors,” “activity groups,” or whatever you call them.  They can blog about it, tweet about it, produce nice glossy materials and presentations.  It gives them power – because that’s what names do.

The problem isn’t names, it’s the power we attribute to them and their use in our analysis.  When ThreatToe calls something BRUCESPRINGSTEEN and CyberCoffin identifies similar activity and names it PEARLJAM, everyone else starts updating their “Rosetta Stone” and makes the association BRUCESPRINGSTEEN = PEARLJAM.  Everyone now starts attributing their intelligence to these two named groups.  But nobody actually knows what the heck these things are aside from a few properties (e.g., IPs, domains, capabilities, etc.).  That is not enough for understanding.

I can’t tell you how many times I’ve heard: “Did you see the recent report from CyberVendor – can you believe they attributed that activity to PEARLJAM?!  That is clearly STEVIEWONDER – those guys don’t know what they’re talking about.”  The problem with that statement is that it assumes: (1) you actually know what you’re talking about (you’ve correctly correlated the activity) and (2) you understand their definition of PEARLJAM.  Within their own analytic definition the correlation could be absolutely correct.  It’s that we’ve made unfounded assumptions and assigned too much power to the names.

But, WHY CAN’T WE JUST ALL AGREE ON NAMES?!  (This is usually said in an elevated tone and usually while slightly intoxicated.)  Because we can’t.  That’s why.  It’s not about the names.  The names are just crutches – simple monikers for what is very complex activity and analytic associations which we still don’t know how to define properly.  To understand this, you need to understand how we’re actually defining, correlating, and classifying these into groups – read the Diamond Model section 9 for this information.

The simple answer: it’s hard enough to correlate activity consistently within a 10-person team, let alone across a variety of organizations.  The complex answer: correlation and classification is a complex analytic problem which requires us to share the same grouping function and feature vector.
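
To make “grouping function and feature vector” concrete, here is a toy sketch.  The feature names, weights, and threshold are entirely hypothetical; the point is that two organizations will only group activity identically if they share all three:

    # Toy activity-correlation sketch.  The features, weights, and
    # threshold are hypothetical; two organizations only produce the
    # same groupings if they share the same feature vector and function.
    WEIGHTS = {"infrastructure": 0.4, "capability": 0.3, "victim": 0.2, "ttp": 0.1}

    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    def similarity(event_a, event_b):
        # Weighted overlap across each feature of the vector.
        return sum(w * jaccard(event_a.get(f, []), event_b.get(f, []))
                   for f, w in WEIGHTS.items())

    def same_group(event_a, event_b, threshold=0.6):
        return similarity(event_a, event_b) >= threshold

    e1 = {"infrastructure": ["203.0.113.7"], "capability": ["loader-x"],
          "victim": ["org-a"], "ttp": ["spearphish"]}
    e2 = {"infrastructure": ["203.0.113.7"], "capability": ["loader-x"],
          "victim": ["org-b"], "ttp": ["spearphish"]}
    print(same_group(e1, e2))  # True here; different weights may disagree

Change the weights or drop a feature (say, because you lack host visibility) and the same two events may no longer correlate – which is exactly the visibility problem described below.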

What we shouldn’t do is start using each other’s names – because, again, it’s not about the names.  If you begin to use the names of others you start to take on their “analytic baggage” as well, since you are now intimately associating your analysis with theirs.  This means you may also take on their errors and mis-associations.  Further, it may imply that you agree with their attribution.  It’s highly unlikely that you’ll want to intertwine your analysis with that of others you don’t really understand.

Instead, we need to rely on definitions.  We need to openly share our correlation and classification logic and the feature vectors we’re applying.  But to those now saying, “Finally!  An answer!  Let’s just share this!” – sorry, it’s not a silver bullet, because the feature vector is highly dependent on visibility.  For instance, some organizations have excellent network visibility, some have outstanding host visibility, others may have great capability/malware visibility, etc.  It means that, generally, I need the same visibility as another organization to effectively use the shared functions and produce accurate output.

So, reader, here I am, telling you about this problem which forces poor analytic practices on a daily basis and causes us all these issues, but without a real solution in sight.  Yes, I think that sharing our definitions will go a LONG way towards improving correlation across organizations and giving those names real value – but it is by no means a silver bullet.  I’m a proponent of this approach (over pure name/Rosetta Stone work) but I know we’ll still spend hours on the phone or in a side conversation at a conference hashing all of this out anyway.  But maybe, just maybe, it will reduce some analytic errors – and if that is the case it is better than what we have today.


Questions for Evaluating an External Threat Intelligence Source

I’ve spoken before about the cost of poor threat intelligence and its risk to an organization.  I’ve also spoken about the four qualities of good intelligence: relevance, timeliness, accuracy, and completeness.  To better evaluate threat intelligence sources – DRIVE FOR TRANSPARENCY!  If you treat threat intelligence like a black box you’re going to lose.

Here are questions to use when evaluating an external source. These are just a starting point or additions to your own list based on your unique needs.

[Relevance] Why do I need threat intelligence?

Before you go out evaluating threat intelligence sources, you need to know what you’re looking for.  This is best done using a threat model for your organization and asking where threat intelligence supports visibility and decision making within that model.  Remember, your own threat intelligence is almost ALWAYS better than that produced by an external source.  External intelligence should complement your own visibility and reduce gaps.

Kudos: Thanks to Stephen Ramage for his comment highlighting the exclusion of such a critical question.

[Relevance] What types of intelligence are available?

Strategic country-level reporting? Cyber threats mixed with political threats?  Technical indicators?  Campaign behaviors?  Written context?  These all determine how useful, actionable, and relevant the intelligence will be for your organization.

[Relevance] Give me your context!

Make sure you understand the context provided with any data.  There is a difference between threat data and threat intelligence.  Intelligence helps drive effective decision-making.  Context makes data relevant.

[Relevance] Which threat types?

Is it limited to botnet C2 nodes?  Commodity threats in general?  Does it cover targeted threats?  Does the threat intelligence provide insight into your threat model?

Related Questions: How many unique threats are distinguishable in the intelligence?

[Relevance] How many direct threats to my organization or those in my industry has your intelligence identified?

Has the source ever shown direct success in highlighting threats in your industry?

[Relevance] How is the intelligence made available to consumers?

If the intelligence is not provided in a usable form, it will not be successful.

[Relevance] What types of use-cases produce the best experience/feedback?  In which use cases has your intelligence failed?

This is a soft-ball question but one which should provoke a good question-and-answer session.  The answers will illuminate their decisions in developing the intelligence and highlight where the intelligence may fit best (or not fit at all).

Related question: What threat model is this intelligence attempting to address?

[Completeness/Relevance] What is the source of the intelligence?

Is this intelligence derived from human sources crawling the dark-web?  Global network apertures?  VirusTotal diving?  This question should frame their visibility into threats and inform the types of intelligence expected.  This also highlights any natural biases in the collection.  Look for sources of external intelligence which complement your own internal threat intelligence capabilities.

[Completeness] What phases of the kill-chain does the intelligence illuminate?

Understand how wide, against any single threat, the intelligence goes.  Does it only show C2, or will it also illuminate pre-exploitation activities?  The wider the intelligence, the greater the likelihood of it being useful.

[Completeness] What is the volume and velocity of the intelligence?

“How much” intelligence is actually produced?  Numbers don’t matter that much – but if the number is ridiculously small or ridiculously large, it is an indicator of possible issues.

[Accuracy] How is the intelligence classified and curated?

Drive for transparency in their process; it will improve your evaluation of accuracy.  Be wary of “silver bullet” buzzword answers such as “machine learning” or “cloud.”

[Accuracy] How is the intelligence validated?

Do you want to track down false positives all day?  No!  Do you want to rely on poor analysis? No! Make sure this question gets enough attention.

Related questions: How often is it re-validated?  How are false positives handled?  How can customers report false positives?  What is your false positive rate?  How many times in the last month have you had to recall or revise an intelligence report?

[Accuracy] Does the intelligence expire?

Expiration of intelligence is key.  Is there a process which continuously validates the intelligence?
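
As a sketch of what expiration might look like mechanically, assuming illustrative lifetimes by indicator type (real lifetimes depend on the provider’s re-validation process and the indicator’s context):

    # Indicator expiration sketch.  The lifetimes below are illustrative
    # assumptions; real values depend on the provider's re-validation
    # process and the indicator's context.
    from datetime import datetime, timedelta

    MAX_AGE = {
        "ip": timedelta(days=7),           # infrastructure churns quickly
        "domain": timedelta(days=30),
        "file_hash": timedelta(days=365),  # hashes rarely go stale
    }

    def is_expired(indicator_type, last_validated, now=None):
        now = now or datetime.utcnow()
        max_age = MAX_AGE.get(indicator_type)
        if max_age is None:
            return True  # unknown type: treat as expired until validated
        return now - last_validated > max_age

    print(is_expired("ip", datetime.utcnow() - timedelta(days=10)))  # True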

[Timeliness] How quickly is the intelligence made available to customers after detection?

Related questions: What part of your process delays intelligence availability?  What is the slowest time to availability from initial detection?

CART: The 4 Qualities of Good Threat Intelligence

I write often of poor quality threat intelligence which pervades the security community.  Poor quality threat intelligence not only has a heavy cost on its consumers, it also threatens the confidence threat intelligence consumers place in their providers.  Confidence is the cornerstone of threat intelligence.  Nobody will take intelligence from an untrustworthy source and act – at least they shouldn’t.  It is important that the producer and consumer trust each other.  That trust needs to be based on transparency and verification.

However, how does one appropriately assess threat intelligence?  The first step must be to identify the qualities which define “good” threat intelligence.  However, these are not binary qualities – there is a clear gradient based on use case.  Timeliness is a good example of this gradient as some intelligence (likely more strategic) has a more fluid timeliness requirement while tactical threat intelligence has stricter requirements.

Further, one single threat intelligence source will not likely be able to satisfy all qualities simultaneously.  For instance, it is unlikely any one provider will have complete visibility across Diamond elements or Kill Chain phases and consumers will have to rely on more than one to achieve satisfactory completeness.

The four qualities are (CART): Completeness, Accuracy, Relevance, and Timeliness.

Completeness

Threat intelligence must be sufficiently complete to provide effective detection and (hopefully) prevention.  For instance, providing a domain indicator used in the exploitation of only one victim is not sufficient for other victims and therefore the intelligence is effectively incomplete and unhelpful.

Accuracy

Threat intelligence must save organizations more in success than it costs them in errors and mistakes.

Relevance

Threat intelligence must address a threat to the organization in a method that allows for effective action.  Intelligence addressing threats not faced by the organization is of no value.  Further, intelligence delivered in a type or method not usable by the organization is also unhelpful.

Timeliness

Threat intelligence must be received and operationalized fast enough to make an impact more valuable than the cost of the threat intelligence itself.


The Cost of Bad Threat Intelligence

There is no doubt that threat intelligence is now “a thing.” At RSA 2015 I couldn’t help but notice how many vendor booths were hawking their relevance to threat intelligence.  I hear about a threat intelligence start-up almost weekly.  That is not surprising given venture capital is flowing and C-suite customers are now investing in “threat intelligence.”  Everyone wants a piece of the pie.

While market growth for threat intelligence produces innovations it also produces negative by-products (welcome to capitalism).  The most concerning by-product is the reduction in threat intelligence quality.

A growing number of published threat intelligence reports contain inaccuracies and poor analysis.  A growing number of indicators across a variety of producers are either stale, irrelevant, or generate so many false positives as to be useless.

What so many fail to realize is the cost of poor quality intelligence.  Here are some of the costs:

  • If a single threat intelligence-sourced alert generates $1000 worth of time to investigate a false positive, it is easy to see how that relatively small amount can multiply within an organization and across enterprises worldwide (see the back-of-the-envelope sketch after this list).
  • If an intelligence producer incorrectly categorizes a threat as APT (say, instead of cyber crime), an organization’s security response to the threat will be (and should be) different, likely involving a deeper investigation.  Again, this additional, and likely unnecessarily deep, investigation is costly in both time and resources.
  • Every poor quality report costs time to read and digest.  Time that could be spent understanding a high-quality report.
  • Every poor association or correlation derails an analytic effort at an organization.

Because organizational security resources are finite and already stretched thin, these mistakes, errors, and poor practices consume critical resources which could be spent on other problems, reducing the security of an organization.

Two market elements have caused this quality reduction:

  • A need to garner attention in the growing cacophony of the threat intelligence market, feeding a “first to publish” mentality which usually results in a “rush to publish.”
  • A lack of customer education resulting in poor evaluation of providers, thereby incentivizing the wrong aspects of threat intelligence – such as volume of indicators over their quality or relevance

Obviously, only threat intelligence providers can solve the problem, but what pressures can help drive effective change?  Here are some:

  • Threat intelligence customers armed with evaluation criteria (particularly quality metrics) which help them leverage threat intelligence effectively without generating unnecessary costs – this will help create market drivers for higher quality
  • Industry must self-police bad intelligence by being honest with ourselves and each other.
  • Threat intelligence aggregation platforms should have quality assessment capabilities informing the intelligence consumer of potential problems (likewise, they are also in a position to highlight timely, relevant, and unique intelligence of great value)
  • Threat intelligence analysts trained in analytic tradecraft stressing quality and accepting an ethical duty

Security professionals practicing threat intelligence must understand the implications of mistakes and poor analysis.  Bad intelligence can and does decrease the security effectiveness of an organization. Therefore it is an ethical duty of the threat intelligence practitioner to reduce errors. Threat intelligence is difficult – intelligence by definition attempts to illuminate the unknown and works by making judgments with imperfect data – errors are natural to the domain.  But, with proper practices and procedures bad intelligence can, and must, be minimized.


15 Things Wrong with Today’s Threat Intelligence Reporting

What I think when I read most threat intelligence reporting

As I have written before, intrusion analysis is equal parts knowing the technical elements of an intrusion and being an analyst.  However, most in this domain spend an inordinate amount of time studying technical details compared to honing any analytic skills.

How long has it been since you’ve taken a highly technical course?  (probably within the last year or two)  How about an analysis course?  (probably in the last 5 years, 10 years, never?)

I read several threat intelligence reports daily, and it is painfully obvious how the lack of analytic skill is harming the discipline.  Many folks come from technical degree backgrounds and analyze packets and binaries well enough, but can’t seem to tell the difference between inductive, deductive, and abductive reasoning.  Furthermore, their managers and mentors never recognize the problem; they just send them to more technical courses.

What is the risk?  Threat intelligence provides insight and context to improve decision making.  The risk of bad intelligence is high. Bad decisions can easily be made from poor intelligence – potentially doing more harm than good.  Good analytic practices improve analysis thereby decreasing the risk of poor intelligence.  You could have the best packet analysis skills in the world, but if you cannot communicate your conclusions effectively to those who need to act on your information those skills are effectively useless in threat intelligence.

We need to do better.  I started this post about a month ago and wrote down a “lesson” whenever I saw an example of poor analysis.  Needless to say, I saw some of these several times.  (Contrary to the recommendation of others, I will not cite/quote specific examples – I believe that would only name and shame others)

Trend – the word actually means something

How many times per week must I read about a new “trend” from threat intelligence?  One or two events does not constitute a trend.  Even three or more events, depending on the universe of events, may not constitute a trend.  Trends are serious.  True trends in adversary activity and methodologies inferred by threat intelligence should drive data collection, analytic tradecraft, and defensive decisions.  Before you start throwing out the word trend just because you’ve seen something a few times, consider the millions of other events you’re not seeing and consider if they’re just an anomaly rather than a trend.

Analysts and conclusions are like horses: sometimes you need to lead them to water

In many cases I can follow the logical progression of hypotheses and facts to the conclusion.  In some cases I cannot – either because the analyst failed to include the appropriate evidence or fact, leaving an assumption in its place, or because of convoluted logical reasoning.  Ensure evidence supports your conclusions and the logical reasoning is clear.  Don’t assume that what is clear in your mind will be clear in mine.

You can’t be completely confident all of the time – use words of estimative probability

Do you know how often I see the effective use of estimative probability in recent threat intelligence reporting?  Almost never.  This is a problem.  Not everything presented is irrefutable fact; indeed, a good analysis will have a proper mix of data/fact, hypotheses, and conclusions, and the confidence in those conclusions varies.  When you don’t effectively apply estimative probability and variable measures of confidence, you remove value from the analysis and increase the risk of poor decision making by consumers.  First, if you don’t know what estimative probability is, LEARN about it.  Then learn how and when to apply it properly.  Importantly, also know what words/phrases to avoid (i.e., weasel words).

Never be afraid to include contrary evidence

Do you know how many times I saw evidence contrary to the conclusion presented in a threat intelligence report this month?  Never.  Practice analytic honesty.  If there is exculpatory evidence, contrary evidence, or an alternative hypothesis – show it.  As long as you’re following some of the other lessons here (e.g., separating fact and hypothesis, using words of estimative probability) it will strengthen your analysis and provide more value to the consumer.

Just because you’ve seen something for the first time doesn’t mean it’s the first time it happened

We all love finding something awesome and telling the world.  It’s cool because we all want to know what you’ve found!  But, please don’t assume it is the first time it has happened or even the first time it has been seen.  Having confidence is critical, but hubris is deadly to analysis.

Don’t operate on an island

You are not alone!  Don’t act like it.  Share and consume, enrich and enhance.  Go ahead and build on the analysis of others (citing appropriately).  Whatever your observation point or data sources, they’re not omnipresent.  I rarely see analysis reference other (obviously) related pieces.  Why is that?  The power of defenders lies in our community and our ability to work together against an adversary.

Be bold, but don’t be stupid

I like my analysis like I like my coffee: bold.  But, there is a line between taking facts to their logical conclusion and taking facts to crazy-land.  The difference is logic.  Ensure your conclusions and hypotheses follow logically from the facts through induction, deduction, or abduction.  If your conclusions cannot be logically traced or tested, then they’re likely living in crazy-land.

Don’t mix hypotheses, conclusions, and facts

Hypotheses, conclusions, and facts are not the same.  Your intelligence reports should not treat them as such.  Ensure that your readers can effectively separate the three through your use of language, formatting, etc.  When the three are confused it can lead to erroneous assumptions by consumers and lead to decisions made on weak conclusions rather than facts.

Save hyperbole for the glossy promotion material

Hyperbole has its place.  It doesn’t have a place in threat intelligence.  Save that for the glossies.  Be precise, honest, and accurate.  Don’t embellish or exaggerate.  Trust me when I say we have enough people running around like chickens with their heads cut off in this field.

Logical fallacies are just that, get to know them

Enough said.  I’m sorry I have to say this, but please understand the differences and applicability of deductive, inductive, and abductive reasoning BEFORE writing your first threat intelligence report.  Or, at the very least, have an editor who knows the difference.

Don’t create new words when existing words suffice

I’m not going to name-call here.  You know who you are.  There are times when words/phrases have multiple meanings.  I understand that.  But, aside from that….stop it.

Tell a story!

Your analysis is a story.  You’re effectively documenting history – studying the past – in the hopes of making conclusions and judgments which will help the present and future.  While you are documenting the activity of computers you are ultimately describing the actions caused by adversaries.  Just like any story your report should have a beginning, middle, and an end.

Answer my questions

Write as if you are in a conversation.  Think about what somebody else may ask of what you’re saying, and address those questions in the text.  Any questions left unanswered have the ability to form into assumptions on the part of the consumer/customer.  If you don’t have an answer, feel free to write: no further information.

Be concise, be accurate

Practice analytic honesty and respect the time of your reader.  The report you’re considering may actually need to be three different reports – one describing all of the malware reverse engineering, one describing all of the network activity, and another describing the threat itself which references the other reports.  Putting everything in one report does not make it more consumable; it makes it less consumable and allows analysts to muddle up various lines of analysis.

Describe diagrams, charts, and tables both in the narrative text but also in a caption

This is just a pet-peeve of mine, but one which I find increases the readability of threat intelligence reports.  Make sure you describe your diagrams, charts, and tables in both the narrative text (as part of the story) and also in a caption.  I find this necessary because as I move backwards and forwards through a report reading and re-reading, forming and re-forming logical chains, I don’t want to hunt for the description in the text every time.  I also don’t want to jump to the caption in the middle of text if not necessary which breaks my concentration.


Discover All Websites Hosted on an IP Address

There have been many times I’ve worked to discover all websites being hosted on a single host address (e.g., IP address).  This required some effort, and none of my techniques generated anything I considered comprehensive or authoritative – usually only a list good enough to get my analysis to the next step.

I found this very useful post today on how to accomplish that exact task, easily.  There is also a bash script posted to do this from the command line.

http://robert.penz.name/722/howto-find-all-websites-running-on-a-given-ip-address/

When I saw the post I immediately recognized how obvious this is.  Of course search engines know this information!  They crawl the web constantly, visiting every website, and they must maintain a website-to-IP mapping.  But until now I didn’t know they exposed this mapping.

The post shows how to do it in Bing – simply use the following syntax:

ip:XXX.XXX.XXX.XXX

Click here to see an example using the current IP address of this website, activeresponse.org: http://www.bing.com/search?q=ip%3A69.195.124.131&go=Submit+Query&qs=bs&form=QBRE
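
If you want to script the lookup, here is a minimal sketch that builds the same query URL from the syntax above.  It only constructs the URL – fetching and parsing the results page is deliberately omitted, since the page structure can change and automated scraping may be restricted:

    # Build a Bing 'ip:' search URL for a given address, per the syntax above.
    # This only constructs the URL; fetching/parsing the results page is
    # deliberately omitted (page structure changes; scraping may be restricted).
    import urllib.parse

    def bing_ip_search_url(ip_address):
        return "http://www.bing.com/search?" + urllib.parse.urlencode({"q": f"ip:{ip_address}"})

    print(bing_ip_search_url("69.195.124.131"))
    # -> http://www.bing.com/search?q=ip%3A69.195.124.131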

 


One challenge I see with this technique is that Bing does not expose the timestamp of this information.  Bing surely caches information for performance purposes, and as such I cannot guarantee that all of these sites are still hosted on that same IP address.  Given the nature of dynamic hosting and cloud services, websites can move around pretty quickly depending on their hosting service.

Therefore, and as I’ve cautioned previously on this blog, ensure you know your data source and their biases and limitations.  In this case the data may be cached and out-of-date.

 

 

On the other hand, having a cache showing what was hosted where in the past is also helpful.

Regardless, it’s a pretty cool and helpful capability to have.

Let me know if you have any other easy ways to accomplish this task!  I tried the task with other search engines but was unsuccessful.

DISCLAIMER: I am employed by Microsoft

