Always A Bad Day For Adversaries

Month: April 2012

Security Must Not Forget the User

Hotel Internet Network Intrusion Detection System at Work

I received this message from my hotel Internet provider, which had taken action to limit my access to 56 Kbps for 10 minutes due to some unknown intrusion detection signature or heuristic.

I was both impressed that a hotel would have such a device in-line to protect the general Internet from aggressive and potentially damaging users and angered by the punitive action taken against my innocuous activity.

Like a well-trained security professional, I immediately took action to mitigate any damage to my system from unwanted malware.  I ran my security tools (anti-virus, software updates, spyware/adware removal, etc.).  After those found nothing, I assumed a false positive, put up with the slow connection, read a book, and then returned to my ‘blazing fast’ 2 Mbps Internet access.

Not an hour later, I again received the message and the punitive action.  This time I ran Wireshark, captured all the traffic to and from my laptop for the next hour, and analyzed the output.  I found nothing of interest.  Having applied my years of security knowledge and forensic ability and found nothing suspicious on my laptop, I was now convinced this was a false positive.  (I will not assume there was nothing there, but I can only take the analysis so far.)
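Much of that manual Wireshark review can be automated after the fact.  Below is a minimal, illustrative sketch (standard library only; the field offsets assume a classic libpcap file containing Ethernet/IPv4 frames) that counts packets per source address from a saved capture — the kind of quick summary I was doing by hand:

```python
import struct
from collections import Counter

def top_talkers(pcap_bytes):
    """Count IPv4 source addresses in a classic libpcap capture (Ethernet link type)."""
    magic, = struct.unpack_from("<I", pcap_bytes, 0)
    endian = "<" if magic == 0xA1B2C3D4 else ">"
    offset = 24  # skip the 24-byte pcap global header
    counts = Counter()
    while offset + 16 <= len(pcap_bytes):
        # per-packet record header: ts_sec, ts_usec, captured length, original length
        _, _, incl_len, _ = struct.unpack_from(endian + "IIII", pcap_bytes, offset)
        frame = pcap_bytes[offset + 16 : offset + 16 + incl_len]
        offset += 16 + incl_len
        # Ethernet II: EtherType 0x0800 at bytes 12-13 marks an IPv4 payload
        if len(frame) >= 34 and frame[12:14] == b"\x08\x00":
            src = ".".join(str(b) for b in frame[26:30])  # IPv4 source address
            counts[src] += 1
    return counts
```

Feeding it the bytes of a capture file (e.g. `open("capture.pcap", "rb").read()`) yields a count per source host — an easy first check for unexpected chatter before digging into individual packets.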

Now I was just upset.  We in security like to think of ourselves as more knowledgeable than the average user about threats and mitigations.  We can find threats they cannot, and we can mitigate those threats for them, without their knowledge, on the front lines (e.g. at the gateway or the ISP).

However, we must also remember that computing systems are here for users – that is their entire purpose, to ultimately provide a benefit to human users.  Therefore, security must always take the user into account and include them whenever possible.

Security must begin and end with the user.  This means that when security is first envisioned, it must account for the purpose of the system and the needs of its users.  If security makes a system unusable, there is no purpose in the system existing at all, and our existence as security professionals is called into question.  Second, it means that users must be included in the security cycle whenever possible.

We must help users help themselves!  Messages such as the one above (e.g. “There is a problem with your system”) do no good.  They do not help solve any problem.  They actually make the problem worse, because now the user must spend time trying to fix a problem that may or may not exist.  For guidance on writing better messages, see: http://www.frontend.com/design/effective-error-messages.html

Second, the message does not inform, educate, or otherwise increase the knowledge of the user in any way.  It did not identify the specific detection (e.g. the signature or heuristic that fired), suggest an effective mitigation, or indicate the severity of the threat.

[important]We in the security community need to better incorporate human factors/user interface knowledge into security and integrate the user from the beginning to the end of our security engineering.[/important]

 

8 Tips for Maintaining Cyber Situational Awareness

 

Situational awareness is the perception of your environment and the comprehension of the elements within it, with particular focus on those critical to decision-making.

Cyber defenders, operators, and analysts must maintain “situational awareness.”  This is more than sitting in a room with several large televisions streaming Twitter and [insert management’s favorite cable news channel here].

Maintaining situational awareness is the act of continuously defining your environment and identifying (and comprehending) elements critical to decision-making.  The purpose of this act is so that one can continuously orient towards the best decision.

Those familiar with the OODA Loop will recognize this as the observe phase in the loop.

It is important to know and comprehend your environment, which means both your internal situation AND the external situation.

Knowing your internal situation usually comes from dashboards, alerts, network activity graphs, parsed log files, vulnerability scanners, updates from vendors, etc.  From this view an analyst finds particularly interesting events or anomalies and understands their organization’s exposure surface.

Most importantly, the situational awareness from these data points should provide a decision-making construct to identify necessary actions (e.g. “should we patch for that?”, “should we close that firewall hole?”, “should I explore that spike in traffic?”).
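As a toy illustration of such a decision-making construct, here is a hypothetical triage function — the asset names, weights, and thresholds are all invented for this example — that turns an event plus the criticality of the affected asset into one of those suggested actions:

```python
# Illustrative only: asset names and criticality weights are made up.
ASSET_CRITICALITY = {"payment-db": 3, "mail-gateway": 2, "dev-sandbox": 1}

def triage(event_severity, asset):
    """event_severity: 1 (low) to 3 (high); returns a suggested action."""
    # Combine detection severity with how much the business cares about the asset.
    score = event_severity * ASSET_CRITICALITY.get(asset, 1)
    if score >= 6:
        return "escalate"      # e.g. "should we close that firewall hole now?"
    if score >= 3:
        return "investigate"   # e.g. "should I explore that spike in traffic?"
    return "log"               # note it and move on
```

The point is not the arithmetic but the shape: the same alert warrants very different actions depending on what it touches, which is exactly why tip 1 below (know your organization) comes first.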

However, maintaining knowledge of the internal situation is not enough.  Just as a pilot must keep their eyes on their instruments AND the horizon, an analyst must keep their eyes on their internal sensors AND the external threat environment.

Keeping track of just ONE of these environments is hard enough; how can an analyst hope to track both effectively, make sound decisions on that information, and act on those decisions in time?

Both management and analysts dream of a tool that will quickly and easily integrate these disparate, complicated environments so that the best decisions can be made quickly.  Until that dream tool is created:

1. Know your organization’s mission statement, business strategy, and business rules

You’ll never know which elements or events are important if you don’t know what is important to your organization.  Be able to articulate your organization’s mission statement.  How is your organization attempting to meet its goals, and how do you support that?  How do the various business units work together to create a cohesive whole?  With this information you can make an informed decision about the criticality of an event based on the assets being affected.

2. Be cognizant of external events affecting your organization’s mission

What is happening in your market space or the global sociopolitical space that changes your security profile?  Will that new acquisition by a foreign competitor make you a target of corporate espionage?  Will hackers target your organization in retaliation for country X expelling ambassadors from country Y?

3. Be aware of internal events

What is happening inside the organization?  Is there a new desktop load being deployed?  Who is being fired today?  What are the upcoming mergers/acquisitions?  All of these affect the exposure surface of an organization and its target profile to attackers.

4. Find and follow the best

The Internet is the greatest collection of human knowledge ever assembled.  Use it.  There are great security researchers and analysts constantly updating information sources with critical knowledge.  Find these sources and follow them.  Use Twitter, Google Reader, Listorious, and other aggregators to help collect this information.  And whom are those critical sources themselves following?
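As one small example of aggregation, standard-library XML parsing is enough to skim headlines out of an RSS 2.0 feed (the function name and feed layout assumed here are illustrative; real-world feeds are messy enough that a tolerant parser such as feedparser is usually the better choice):

```python
import xml.etree.ElementTree as ET

def latest_headlines(rss_xml, limit=5):
    """Pull item titles out of an RSS 2.0 feed document."""
    root = ET.fromstring(rss_xml)
    # RSS 2.0 nests <item> elements under <channel>; iter() finds them at any depth.
    titles = [item.findtext("title", default="").strip()
              for item in root.iter("item")]
    return titles[:limit]
```

Run over a handful of trusted feeds on a schedule, even something this crude gives you a single pane of headlines instead of a dozen open browser tabs.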

5. Be aware and able to communicate what is missing

Know what is missing from your viewpoint.  Are there any data feeds which would add to the picture?  What are the biases and limitations of your data sets?  How do these affect your decision-making?  Knowing this in advance and taking it into account will help reduce poor decision-making and unexpected consequences.

6. Know the rule sets, analytics, and data sources

The better an analyst knows their own rule-sets, analytics, and data sources, the more efficiently and accurately they can distinguish critical from non-critical events.

7. Eliminate Useless Information

One must carefully balance the need for information against the danger of information overload, which causes poor or delayed decision-making.  Therefore, eliminate any useless information sources.  This includes signatures with high false-positive rates and network activity graphs that nobody looks at.  It is better to have less information of higher quality than a high quantity that muddles decision-making.  Replace bad data feeds with something useful, or better yet, don’t replace them at all.
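A sketch of that pruning (the signature names and numbers are invented): track true and false positives per signature and drop anything whose false-positive rate exceeds a threshold you choose.

```python
def prune_signatures(stats, max_fp_rate=0.9):
    """Keep only signatures with an acceptable false-positive rate.

    stats maps signature name -> (true_positives, false_positives).
    """
    kept = {}
    for sig, (tp, fp) in stats.items():
        total = tp + fp
        # Keep signatures that have never fired (no evidence yet) or that
        # stay at or below the allowed false-positive rate.
        if total == 0 or fp / total <= max_fp_rate:
            kept[sig] = (tp, fp)
    return kept
```

The threshold is deliberately a parameter: how much noise is tolerable depends on the asset the signature protects, which loops back to tip 1.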

8. Not Everyone Requires the Same Information

It is important for organizations to understand that not everyone needs the same information to maintain situational awareness.  People think differently; use that to your advantage.  Don’t try to make robots.  People perceive their environment differently from one another.  Allow each person to develop their own information feeds and visualizations to maximize effectiveness.

Don’t Hold Developers Blameless for Security Woes – Security Must Begin In Development

“WordPress is a very popular platform for attackers to target,” he said. “There’s not a whole lot WordPress can do if people neglect to update their WordPress or plug-in software.”  Roel Schouwenberg, senior researcher for Kaspersky Lab

SC Magazine recently published a summary of the Kaspersky report identifying WordPress as the attack vector for intruders to establish infrastructure to spread the Flashback/Flashfake OSX Trojan.  The article included quotes from Roel Schouwenberg, senior researcher for Kaspersky Lab.

Mr. Schouwenberg (@schouw), as seen in the above quote, held the WordPress developers effectively blameless for their product being used as the primary infection vector for a 600,000+ member botnet – about 2% of all Mac OS installations.

Don’t get me wrong: if you choose to host your own publicly facing service on the Internet (such as this one), you have an ethical duty to ensure, to the best of your ability, that it is not being used for evil.  This includes regular updates and patching.

Additionally, Internet users share responsibility for ensuring the security of their own machines to the best of their ability.  This includes running effective anti-virus (yes, I’m talking to you, OSX users) and patching software (such as the Java Virtual Machine exploited by this trojan).

However, neither the operators of blogs (such as myself) nor the end-users of computers (again, such as myself) can be expected to reverse engineer every piece of software or service we run in search of vulnerabilities, and then patch them.  That is the sole responsibility of the software developer.  Therefore, developers hold at least some of the blame.

Additionally, patching is almost always a losing security proposition, because a patch is almost always a response to an already-known vulnerability.  That is why zero-day exploits remain so effective.

The answer is to write secure software from the beginning.  I’m not saying that all software should be, or will be, free of vulnerabilities – I have written over 1,000,000 lines of code myself and I fully understand what it means to write secure code and the incredible, maybe impossible, challenge that entails.  But we must begin to hold software developers more accountable for vulnerabilities rather than placing all the blame on the users.  The developers are the only ones who can effectively preempt attackers, by preventing vulnerabilities in the first place and by requiring that third-party plug-in architectures (such as WordPress’s) be secure as well (e.g. by sandboxing plug-ins).

Microsoft has done a tremendous job in this endeavor.  It hasn’t been easy but over the last 10 years they have made significant progress as shown by the reduced vulnerability and exposure surface of their software as well as third-party software running on their operating system.  It can be, and should be, done.

 

The Science of Intrusion Analysis and Incident Response: Introduction

[important]This is the first of several posts in a series expanding on how to turn intrusion analysis into a science.  Subscribe to the blog via email, follow us on Twitter, or like us on Facebook to keep up!  [/important]

Previously I wrote about the Art of Intrusion Analysis and how I thought that Michelangelo’s quote was the best representation of how intrusion analysts arrive at knowledge.

However, my concern is not to document the art of Intrusion Analysis or Incident Response, but rather to transform the art into a science.  What does that mean?  What is the science of intrusion analysis and incident response?

First, we must define science (there are many definitions, this one will suffice for our purposes).

Science (from Latin scientia, meaning “knowledge”) is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe. — Wikipedia

Second, how will we know when intrusion analysis and incident response have become a science?  Aristotle can give us the answer.

[A] man knows a thing scientifically when he possesses a conviction arrived at in a certain way, and when the first principles on which that conviction rests are known to him with certainty—for unless he is more certain of his first principles than of the conclusion drawn from them he will only possess the knowledge in question accidentally.  — Aristotle (384 BC – 322 BC) in Nicomachean Ethics

From this I draw the following requirements to make intrusion analysis and incident response into a science:

  1. Intrusion Analysis and Incident Response must be systematic
  2. There must be first principles upon which hypotheses and predictions can be drawn and tested with experimentation
  3. There must be an organizing function to build knowledge
  4. There must be a set of theories which are generally accepted, testable, and repeatable following from first principles and hypotheses

Why do we care whether intrusion analysis is a science or not?  A science of intrusion analysis and incident response means less duplication of effort solving the same problems and a more cohesive approach to improving tools, tradecraft, and training.

Thanks to Richard (@taosecurity and Tao Security Blog) for the unanticipated use of his image! 🙂

 
