Always A Bad Day For Adversaries

The Cost of Bad Threat Intelligence

There is no doubt that threat intelligence is now “a thing.” At RSA 2015, I couldn’t help but notice how many vendor booths were hawking their relevance to threat intelligence. I hear about a new threat intelligence start-up almost weekly. That is not surprising given that venture capital is flowing and C-suite customers are now investing in “threat intelligence.” Everyone wants a piece of the pie.

While market growth for threat intelligence produces innovation, it also produces negative by-products (welcome to capitalism). The most concerning of these is a reduction in threat intelligence quality.

A growing number of published threat intelligence reports contain inaccuracies and poor analysis. A growing number of indicators, across a variety of producers, are either stale, irrelevant, or generate so many false positives as to be useless.

What so many fail to realize is the cost of poor-quality intelligence. Here are some of those costs:

  • If a single threat intelligence-sourced alert generates $1,000 worth of time to investigate a false positive, it is easy to see how that relatively small amount can multiply within an organization and across enterprises worldwide (see the back-of-the-envelope sketch after this list).
  • If an intelligence producer’s report incorrectly categorizes a threat as APT (say, instead of cyber crime), an organization’s security response to the threat will be (and should be) different, likely involving a deeper investigation. Again, this additional, and likely unnecessary, depth of investigation is costly in both time and resources.
  • Every poor-quality report costs time to read and digest, time that could be spent understanding a high-quality report.
  • Every poor association or correlation derails an analytic effort at an organization.
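To make the first bullet concrete, here is a back-of-the-envelope sketch of how one low-quality feed’s false positives compound across subscribers. Every figure below (indicator count, alert rate, subscriber count, the $1,000 investigation cost) is an illustrative assumption, not a measurement:

```python
# Back-of-the-envelope model of the cost of false positives from a single
# low-quality threat intelligence feed. All figures are illustrative
# assumptions, not measurements.

COST_PER_FALSE_POSITIVE = 1_000  # analyst time to clear one bad alert (USD)

def false_positive_cost(indicators: int,
                        fp_alerts_per_indicator: float,
                        organizations: int) -> float:
    """Yearly cost of chasing false positives across all subscribers."""
    alerts_per_org = indicators * fp_alerts_per_indicator
    return alerts_per_org * COST_PER_FALSE_POSITIVE * organizations

# A feed with 50 stale indicators, each firing ~4 false alerts a year,
# consumed by 200 organizations:
total = false_positive_cost(indicators=50,
                            fp_alerts_per_indicator=4,
                            organizations=200)
print(f"${total:,.0f} per year")  # -> $40,000,000 per year
```

The point is not the specific numbers but the multiplication: a per-alert cost that looks trivial to any one analyst scales into a market-wide waste of millions once a bad indicator set is widely consumed.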

Because organizational security resources are finite and already stretched thin, these mistakes, errors, and poor practices consume critical resources that could be spent on other problems, reducing the security of an organization.

Two market elements have caused this quality reduction:

  • A need to garner attention in the growing cacophony of the threat intelligence market, feeding a “first to publish” mentality that usually results in a “rush to publish.”
  • A lack of customer education, resulting in poor evaluation of providers and thereby incentivizing the wrong aspects of threat intelligence, such as volume of indicators over their quality or relevance.

Obviously, only threat intelligence providers can solve the problem, but what pressures can help drive effective change?  Here are some:

  • Threat intelligence customers armed with evaluation criteria (particularly quality metrics) that help them leverage threat intelligence effectively without generating unnecessary costs; this will help create market drivers for higher quality.
  • An industry that self-polices bad intelligence by being honest with ourselves and each other.
  • Threat intelligence aggregation platforms with quality assessment capabilities informing the intelligence consumer of potential problems (likewise, they are also in a position to highlight timely, relevant, and unique intelligence of great value); a sketch of such scoring follows this list.
  • Threat intelligence analysts trained in analytic tradecraft that stresses quality, and who accept an ethical duty.
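As a sketch of the platform-side quality assessment suggested above, here is a minimal scoring function an aggregation platform could run over incoming indicators. The fields (age, confirmed true/false positives, corroborating sources) and the weights are assumptions chosen for illustration, not an established standard:

```python
from dataclasses import dataclass

# A minimal sketch of indicator quality scoring for an aggregation platform.
# Fields and weights are illustrative assumptions, not a standard.

@dataclass
class Indicator:
    value: str            # e.g. an IP, domain, or file hash
    age_days: int         # time since the indicator was first reported
    false_positives: int  # confirmed false-positive alerts it has generated
    true_positives: int   # confirmed true-positive alerts
    sources: int          # independent producers reporting it

def quality_score(ind: Indicator) -> float:
    """Score in [0, 1]; higher means more likely timely and reliable."""
    freshness = max(0.0, 1.0 - ind.age_days / 365)            # stale -> 0
    total = ind.true_positives + ind.false_positives
    precision = ind.true_positives / total if total else 0.5  # unknown -> neutral
    corroboration = min(ind.sources, 5) / 5                   # cap the bonus
    return 0.4 * precision + 0.35 * freshness + 0.25 * corroboration

# A year-old, single-source indicator with a poor track record:
score = quality_score(Indicator("203.0.113.7", age_days=400,
                                false_positives=12, true_positives=1,
                                sources=1))
print(f"{score:.2f}")  # low score -> warn the consumer before alerting on it
```

Even a crude score like this would let a platform flag stale, single-source indicators to consumers before they start generating the expensive false-positive alerts described earlier.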

Security professionals practicing threat intelligence must understand the implications of mistakes and poor analysis. Bad intelligence can and does decrease the security effectiveness of an organization. Therefore, it is an ethical duty of the threat intelligence practitioner to reduce errors. Threat intelligence is difficult: intelligence by definition attempts to illuminate the unknown and works by making judgments from imperfect data, so errors are natural to the domain. But with proper practices and procedures, bad intelligence can, and must, be minimized.

Two Computer Security Experts Jailed for Failure to Prevent Hospital Hack

Washington DC – After a major computer attack on a hospital network by a relatively unknown hacker caused the death of three patients, there were many questions. Why did the hacker do it? Was the hospital doing enough to protect its patients? Why wasn’t the security good enough to prevent the attack?

Now that the trial has concluded we have some more answers, but many questions still remain. First, we know that the hacker did not intend to attack a hospital. He thought he was attacking a bank network as part of a protest movement. He was relatively unskilled, using complicated but effective tools downloaded from the Internet. He successfully surveilled the bank network, but when it was time for the attack he mistakenly typed in the wrong number for his target, unknowingly sending his tools to attack a hospital network.

The computers managing the newly installed electronic patient records, which included medication and dosage information, went down, causing confusion throughout the hospital. The records, which normally hold critical information about a patient’s medical history, allergies, and current state, were now gone. Doctors and nurses on the day shift did not know the correct dosages, or even the correct drugs, prescribed during the night shift. This led to three patients being given either an overdose or another drug entirely, causing serious, and fatal, reactions.

The hacker was sentenced last month to life in prison for the criminally negligent manslaughter of the three patients. However, in a turn of events, the two network security experts who were charged with the protection of the hospital’s network are now in jail, facing 10 years for their failure to prevent the attack.

Prosecutors argued that the security experts should have detected and prevented the attack well before the damage to the hospital record system. They were specifically trained to do so and were in the best position of anyone to detect the hacker and judge the risk. Yet their failure to do so put the lives of every patient in the hospital at risk and eventually caused the deaths of three.

The defendants argued that the network was far too large and complicated to be effectively defended, and that they could not have predicted every possible attack and its consequences.

In the end, the jury agreed more with the prosecution than with the defendants. The long-term consequences of such a verdict are still unknown.

This is, of course, a fictitious story based on the real case of the jailed Italian scientists convicted of failing to effectively communicate the risk of a major earthquake, the 2009 L’Aquila earthquake, in which roughly 300 people died. As they say, hindsight is 20/20. Looking back, one could easily say that the earthquake was imminent given the signs. But those same signs occur in many places around the world daily without the devastating effects of a large earthquake immediately following.

After reading the story of the earthquake scientists, I could not help but think of the many scenarios where, as security professionals, we are asked to assess the risk to, and ultimately prevent damage to, networks and systems critical to life, safety, and national security. What if we were wrong and people died? Beyond the guilt I could imagine feeling, would society at large hold us responsible? Should we be held responsible?

I think back to my time studying computer ethics and the various ethical codes I have signed in my life agreeing to act responsibly, take responsibility for risk, and make good decisions.  Yet bad things happen.  And I cannot say whether society would judge my work good enough in such a situation.

We are the experts.  We are being paid to make the right decision in the protection of our networks.  There is nobody in a better position than us to make those decisions.  We know the network.  We know the systems.  We know the threat.  Yet we still fail.

Will our failure become so great one day that we are held to account for the death of innocents based on our faulty risk assessments and ineffective defenses?

Let us hope not.
