Always A Bad Day For Adversaries


Questions for Evaluating an External Threat Intelligence Source

I’ve spoken before about the cost of poor threat intelligence and its risk to an organization. I’ve also spoken about the four qualities of good intelligence: relevance, timeliness, accuracy, and completeness. To better evaluate threat intelligence sources — DRIVE FOR TRANSPARENCY! If you treat threat intelligence like a black box, you’re going to lose.

Here are questions to use when evaluating an external source. Treat them as a starting point, or as additions to your own list based on your unique needs.

[Relevance] Why do I need threat intelligence?

Before you go out evaluating threat intelligence sources, you need to know what you’re looking for.  This is best done using a threat model for your organization and asking where threat intelligence supports visibility and decision making within that model.  Remember, your own threat intelligence is almost ALWAYS better than that produced by an external source.  External intelligence should complement your own visibility and reduce gaps.

Kudos: Thanks to Stephen Ramage for his comment highlighting the exclusion of such a critical question.

[Relevance] What types of intelligence are available?

Strategic country-level reporting? Cyber threats mixed with political threats?  Technical indicators?  Campaign behaviors?  Written context?  These all determine how useful, actionable, and relevant the intelligence will be for your organization.

[Relevance] Give me your context!

Make sure you understand the context provided with any data.  There is a difference between threat data and threat intelligence.  Intelligence helps drive effective decision-making.  Context makes data relevant.
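As an illustration of the difference between threat data and threat intelligence, consider a bare indicator versus the same indicator enriched with decision-supporting context. This is a minimal sketch; every field name here is hypothetical, not drawn from any particular feed or standard:

```python
from dataclasses import dataclass

# Threat *data*: a bare indicator with no context -- hard to act on.
bare_indicator = "203.0.113.42"

# Threat *intelligence*: the same indicator enriched with context
# that supports a decision. All field names are illustrative.
@dataclass
class EnrichedIndicator:
    value: str
    indicator_type: str      # e.g. "ipv4", "domain", "hash"
    threat: str              # named threat or campaign
    kill_chain_phase: str    # where in the intrusion it was observed
    first_seen: str          # ISO 8601 timestamp
    confidence: float        # analyst-assigned, 0.0 - 1.0
    recommended_action: str  # what a consumer should do with it

intel = EnrichedIndicator(
    value="203.0.113.42",
    indicator_type="ipv4",
    threat="Example Campaign X",
    kill_chain_phase="command-and-control",
    first_seen="2013-06-01T12:00:00Z",
    confidence=0.8,
    recommended_action="block outbound traffic and hunt for prior beacons",
)
```

The bare value tells you nothing about what to do; the enriched record is what actually drives a decision.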

[Relevance] Which threat types?

Is it limited to botnet C2 nodes?  Commodity threats in general?  Does it cover targeted threats?  Does the threat intelligence provide insight into your threat model?

Related Questions: How many unique threats are distinguishable in the intelligence?

[Relevance] How many direct threats to my organization or those in my industry has your intelligence identified?

Has the source ever shown direct success in highlighting threats in your industry?

[Relevance] How is the intelligence made available to consumers?

If the intelligence is not provided in a usable form, it will not be successful.

[Relevance] What types of use-cases produce the best experience/feedback?  In which use cases has your intelligence failed?

This is a soft-ball question, but one which should provoke a good question-and-answer session. The answers will illuminate the decisions made in developing the intelligence and highlight where it may fit best (or not fit at all).

Related question: What threat model is this intelligence attempting to address?

[Completeness/Relevance] What is the source of the intelligence?

Is this intelligence derived from human sources crawling the dark-web?  Global network apertures?  VirusTotal diving?  This question should frame their visibility into threats and inform the types of intelligence expected.  This also highlights any natural biases in the collection.  Look for sources of external intelligence which complement your own internal threat intelligence capabilities.

[Completeness] What phases of the kill-chain does the intelligence illuminate?

Understand how wide, against any single threat, the intelligence goes. Does it only show C2, or will it illuminate pre-exploitation activities as well? The wider the intelligence, the greater the likelihood it will be useful.

[Completeness] What is the volume and velocity of the intelligence?

“How much” intelligence is actually produced?  Numbers don’t matter that much – but if the number is ridiculously small or ridiculously large, it is an indicator of possible issues.

[Accuracy] How is the intelligence classified and curated?

Drive for transparency into their classification and curation process; it will improve your evaluation of accuracy. Be wary of “silver bullet” buzz-word answers such as “machine learning” or “cloud.”

[Accuracy] How is the intelligence validated?

Do you want to track down false positives all day?  No!  Do you want to rely on poor analysis? No! Make sure this question gets enough attention.

Related questions: How often is it re-validated?  How are false positives handled?  How can customers report false positives?  What is your false positive rate?  How many times in the last month have you had to recall or revise an intelligence report?
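A vendor should be able to answer the false-positive question with numbers, and you can track the rate yourself from your own alert dispositions. A minimal sketch (the example counts are arbitrary):

```python
def false_positive_rate(true_positives: int, false_positives: int) -> float:
    """Fraction of investigated indicator hits that were false positives."""
    total = true_positives + false_positives
    if total == 0:
        return 0.0  # no dispositions yet; nothing to measure
    return false_positives / total

# Example: of 200 indicator hits investigated, 30 were false positives.
rate = false_positive_rate(true_positives=170, false_positives=30)
```

Comparing your measured rate against the vendor’s claimed rate is itself a transparency test.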

[Accuracy] Does the intelligence expire?

Expiration of intelligence is key.  Is there a process which continuously validates the intelligence?
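One simple way to enforce expiration on the consumer side is to age out any indicator the source has not re-validated recently. A sketch, assuming a flat maximum age (the 30-day window is an arbitrary example; real decay intervals should vary by indicator type):

```python
from datetime import datetime, timedelta

def is_expired(last_validated: datetime, now: datetime,
               max_age: timedelta = timedelta(days=30)) -> bool:
    """An indicator expires if it has not been re-validated within max_age."""
    return now - last_validated > max_age

now = datetime(2013, 7, 1)
still_live = not is_expired(datetime(2013, 6, 15), now)  # 16 days old
aged_out = is_expired(datetime(2013, 5, 1), now)         # 61 days old
```

Without a rule like this, stale indicators accumulate and quietly inflate your false-positive rate.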

[Timeliness] How quickly is the intelligence made available to customers after detection?

Related questions: What part of your process delays intelligence availability?  What is the slowest time to availability from initial detection?
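You can measure time-to-availability yourself by recording, per report, when the source says it detected the activity and when the intelligence reached you. A sketch with illustrative timestamps (the median/worst-case pair mirrors the two related questions above):

```python
from datetime import datetime, timedelta
from statistics import median

def availability_latency_hours(pairs):
    """Given (detected_at, released_at) pairs, return (median, worst-case)
    latency in hours from initial detection to customer availability."""
    hours = [(released - detected).total_seconds() / 3600
             for detected, released in pairs]
    return median(hours), max(hours)

# Illustrative timestamps only.
base = datetime(2013, 6, 1)
pairs = [(base, base + timedelta(hours=2)),
         (base, base + timedelta(hours=4)),
         (base, base + timedelta(hours=48))]
med, worst = availability_latency_hours(pairs)
```

A low median with a very high worst case suggests a specific stage of the vendor’s process is the bottleneck — exactly what the questions above are probing.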

Two Computer Security Experts Jailed for Failure to Prevent Hospital Hack

Washington DC – After a major computer attack on a hospital network by a relatively unknown hacker caused the death of three patients, there were many questions. Why did the hacker do it? Was the hospital doing enough to protect its patients? Why wasn’t the security good enough to prevent the attack?

Now that the trial has concluded we have some more answers, but many questions still remain. First, we know that the hacker did not intend to attack a hospital. He thought he was attacking a bank network as part of a protest movement. He was relatively unskilled, using complicated but effective tools downloaded from the Internet. He successfully surveilled the bank network, but when it was time for the attack he mistakenly typed in a wrong number for his target, unknowingly sending his tools against a hospital network instead.

The computers managing the newly installed electronic patient records, including medication and dosage information, went down, causing confusion throughout the hospital. The records, which normally hold critical information about a patient’s medical history, allergies, and current state, were now gone. Doctors and nurses on the day shift did not know the correct dosage, or even the correct drug, prescribed during the night shift. This led to three patients being given either an overdose or another drug entirely, causing a serious, and fatal, reaction.

The hacker was sentenced last month to life in prison for the criminally negligent manslaughter of the three patients. However, in a turn of events, the two network security experts who were charged with protecting the hospital’s network are now in jail, facing 10 years for their failure to prevent the attack.

Prosecutors argued that the security experts should have detected and prevented the attack well before the damage to the hospital record system.  They were specifically trained to do so and in the best position of anyone to detect the hacker and judge the risk.  Yet, their failure to do so put the lives of every patient in the hospital at risk and eventually caused the death of three.

The defendants argued that the network was far too large and complicated to be effectively defended, and that they could not have predicted every possible attack and its consequences.

In the end the jury agreed more with the prosecution than the defendants. What long-term consequences this holds is still unknown.

This is, of course, a fictitious story based on a real case of the jailed Italian scientists who were convicted of failing to effectively communicate the risk of a major earthquake.  300 people died in that earthquake.  As they say, hindsight is 20/20.  Looking back one could easily say that the earthquake was imminent given the signs.  But then those signs occur in many places around the world daily without the devastating effects of a large earthquake immediately following.

After reading the story of the earthquake scientists, I could not help but think of the many scenarios where, as security professionals, we are asked to assess the risk to, and ultimately prevent damage to, life-safety and national-security-critical networks and systems. What if we were wrong and people died? Beyond the guilt I could imagine feeling, would society at large hold us responsible? Should we be held responsible?

I think back to my time studying computer ethics and the various ethical codes I have signed in my life agreeing to act responsibly, take responsibility for risk, and make good decisions.  Yet bad things happen.  And I cannot say whether society would judge my work good enough in such a situation.

We are the experts.  We are being paid to make the right decision in the protection of our networks.  There is nobody in a better position than us to make those decisions.  We know the network.  We know the systems.  We know the threat.  Yet we still fail.

Will our failure become so great one day that we are held to account for the death of innocents based on our faulty risk assessments and ineffective defenses?

Let us hope not.
