Always A Bad Day For Adversaries

Author: Sergio Caltagirone

Sergio is the head of threat intelligence analysis at Microsoft and operates a global threat intelligence mission to discover, understand, track, and disrupt malicious activity against Microsoft and its customers. He is passionate about empowering defenders with timely and accurate information and moving information security from a reactive to a proactive posture.

Before Microsoft he worked for the United States Government for 8+ years and built and led several threat intelligence missions.

Sergio grew up in Western Washington State and attended the University of Portland, where he received his Bachelor of Science in Computer Science along with a degree in Theology and a strong liberal arts background. He went on to the University of Idaho in 2005, where he received his Master of Science in Computer Science. At Idaho, Sergio expanded his education by becoming the first computer science student allowed to take law classes, where he focused on legal topics connected to computer security.

Sergio has been very active in research and innovation, receiving his first patent for work with cognitive psychologists on graphical passwords (US20100169958), and has over 12 publications and a thesis on the topic of Active Response. He has gone on to work in several organizations doing computer and network security, forensics, and intrusion analysis.

Sergio is also the Chief Scientist of The Center for Cyber Intelligence Analysis and Threat Research working towards the goal of moving cyber from an art to a science.


A Hacker's Hierarchy of Operational Needs

Maslow's Hierarchy of Needs

All humans have a basic set of needs which they work to satisfy – as described by Maslow in his seminal "A Theory of Human Motivation."  Maslow did not create a true hierarchy; he describes how needs sometimes compete and sometimes complement one another.  Instead of a strict hierarchy, these needs form dominating preferences or priorities.

It made me question whether there is a cyber operational equivalent: a set of hierarchical needs or requirements the adversary/hacker must satisfy to meet their goal.  Like Maslow, I do not believe this hierarchy is necessarily serial in nature; rather, the needs inform priorities and dominate preferences, and they need not be satisfied in order.

For instance, a hacker may create a capability and then sell that capability, or their skill in using it, to an organization, thereby gaining funding for the rest of the operation.  While the capability was achieved first in the chain, it was a vehicle to satisfy a more basic need: funding.

 


Basic Necessities: Those things which allow a person to live and work effectively.

Funding: Even the most basic funding is required for equipment (computers) and for purchases such as connectivity to the Internet.

Connectivity: A hacker must be connected to a network through which they can reach potential targets.

Target Vulnerabilities: A hacker must have a set of vulnerabilities and exposures they can exploit to achieve their goals.

Capabilities/Infrastructure: I believe these are equally important, and both are a requirement for operations – the capability to achieve the intended effect, and the infrastructure to deliver that capability to the target victims.

Targets: A hacker must have one or more targets they can use to achieve their intent.

Access: A hacker must have access to the target to achieve any effects and ultimately a positive outcome.

Outcome: The successful exploitation, attack, etc. that was the entire intent of the hacker.

Reward: The reward for a successful operation (fame, fortune, notoriety, etc.).
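
To make the ordering concrete, here is a minimal sketch of the hierarchy above as an ordered data structure. Python and all names here are my own illustrative choices, not part of the original model; it simply encodes the idea that the lower levels dominate priorities even when the levels are not satisfied serially.

from enum import IntEnum

class OperationalNeed(IntEnum):
    # Lower values are more basic needs that dominate preferences,
    # even though they are not necessarily satisfied in this order.
    BASIC_NECESSITIES = 1
    FUNDING = 2
    CONNECTIVITY = 3
    TARGET_VULNERABILITIES = 4
    CAPABILITIES_INFRASTRUCTURE = 5
    TARGETS = 6
    ACCESS = 7
    OUTCOME = 8
    REWARD = 9

def dominant_unmet_need(satisfied):
    """Return the most basic need not yet satisfied: the dominating priority."""
    for need in OperationalNeed:
        if need not in satisfied:
            return need
    return None

# Example: a hacker who already built a capability but lacks funding
# still has funding as the dominating preference.
print(dominant_unmet_need({OperationalNeed.BASIC_NECESSITIES,
                           OperationalNeed.CAPABILITIES_INFRASTRUCTURE}).name)  # FUNDING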


So, what do you think?  Do they map to your understanding of the hierarchy for the operational needs of a hacker? How would you use this model?

 

5 Intrusion Analysis Ideas in 10 Minutes

Here are 5 cyber security ideas to improve your analysis and understanding which will take no more than 10 minutes of your time.

1. Inspect all events with a sliding scale – Good, Suspicious, Bad

An analytic mind-set should move as evidence is uncovered

One of the easiest, and worst, mistakes an analyst can make is to be too firm in their judgement.  I train analysts, and myself, to use a freely sliding scale when inspecting events, packets, and binaries.  This scale moves between known good, suspicious, and known bad as uncovered evidence supports a “goodness” or “badness” final judgement.

It is natural to come to premature conclusions when analyzing data, and many preach against doing so.  But I have never known a perfectly objective human, and that preaching discounts our naturally occurring and helpful ability to make quick judgments which drive our desire for more data and evidence.  Instead, we should preach against the analyst who is hasty in a final judgement and unwilling to accept and synthesize new evidence in either direction.
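
As a rough illustration of the sliding scale, here is a minimal sketch. Python, the scale values, the evidence weights, and the threshold are all my own illustrative assumptions rather than a prescribed method; the point is only that the judgement keeps moving as evidence is uncovered.

GOOD, SUSPICIOUS, BAD = -1.0, 0.0, 1.0

def update_judgement(score, evidence_weight):
    """Slide the score toward bad (positive weight) or good (negative weight);
    the judgement stays revisable as new evidence is uncovered."""
    return max(GOOD, min(BAD, score + evidence_weight))

def verdict(score, threshold=0.7):
    if score >= threshold:
        return "known bad"
    if score <= -threshold:
        return "known good"
    return "suspicious"

# Example: early evidence looks benign, later artifacts push the other way.
score = SUSPICIOUS
for weight in (-0.3, +0.5, +0.6):   # each uncovered artifact nudges the scale
    score = update_judgement(score, weight)
print(verdict(score))  # "known bad" only once the evidence supports it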

2. Be willing to accept suspicious

There will be many times when, after hours or days of work and collaboration, the best judgement is that the event, packet, log entry, binary, etc. is still not known to be either “good” or “bad.”  An analyst must be willing to accept this unknown middle ground of “suspicious” where final judgement is elusive.  There will be times when there is not enough evidence either way, nor is it likely more evidence will be uncovered (e.g. a critical log file was purged, or the company will not provide a missing piece of information).  Be willing to accept suspicious as an answer and reject the pressure to render a final judgement of good or bad.

However, it is important that an analyst is willing to render an informed judgement to decision makers as to where, on the scale, the event lies and what evidence supports that judgement – and more importantly, what evidence supports a contrary judgement.

3. Goodness Outweighs Badness

Some of the best cyber security analysts I have known have been network engineers and system administrators – those that best understand how systems and users actually work rather than relying on the hypothetical or documentation.  This is because the majority of network activity is good/valid versus bad.

The most valuable skill an intrusion analyst can have is to quickly and accurately identify valid activity and separate out the non-valid/malicious/bad activity from the pile.  My number one recommendation to upcoming intrusion analysts is to not just focus on courses and materials which teach intrusion techniques (e.g. SANS) but to spend an equal amount of time on the RFCs and other training opportunities which teach the valid operation and administration of modern networks and hosts.

4. Counter-Intelligence is our closest domain partner

Of all the domains I have studied to further my exploration of intrusion analysis and cyber security, it is counter-intelligence which I have found to offer the most insight and parallels.  Others may argue with this, but counter-intelligence works in an assumed-compromised environment where the focus is primarily on detection and damage limitation when compromise occurs.

Of course, counter-intelligence necessarily breeds paranoia – but that is also a good quality in an intrusion analyst, when balanced with the right amount of sanity 🙂

5. Document everything and don’t get lost in the “rabbit hole”

In the pursuit of an activity, with evidence being gathered and judgments and hypotheses shifting, things can move quickly.  When conducting intrusion analysis, document everything – even if it seems irrelevant – you never know when a case will hinge on a small, originally overlooked detail.  In this documentation also record all open questions and hypotheses so that when “going down the rabbit hole” of analysis towards one hypothesis, other lines of analysis are not forgotten or discounted without effective evidence gathering.

What is ‘Cyber’?

Recently, a very amusing website launched to ask a very simple question: “will using the prefix cyber make me look like an idiot?”  It predicated the response on the answers to three questions: (1) are you a science fiction author, (2) are you about to engage in dirty instant messaging, and (3) are you using the word to engage in scaremongering?  You can see the answer to my questions below, based on my everyday usage of the word:

The site is obviously established to poke fun at the growing use of the word cyber to describe many subjects and items.  There are many in the computer security/information assurance field who agree with that premise and openly disagree with its use in any form outside of science fiction or dirty instant messaging.

I come from a background in academia and research.  I understand the importance of word choice and usage.  However, I am also aware of the need to adopt a new lexicon when an existing one is not enough.  I believe this is one of those cases.

I too used to abhor the use of the word cyber in the computer security/information assurance/network security domains.  However, as I matured in my understanding of the topic beyond the technical concepts of these fields and into the human factors and psychology of the field, I came to realize these terms did not adequately describe the full scope of the analysis and operations required to secure computer systems.

The word cyber is necessary.

It is necessary because this field is much larger than just securing technical systems.  It MUST also embrace analysis, psychology, human factors, and aggressive operations (hence the name of the blog – ActiveResponse), amongst others.

The other terms used in this area (e.g. Computer Security, Information Assurance, Network Security, etc.) are all fine and have their place.  But they lack one fundamental aspect: the human.

Cyber originated in our lexicon with Norbert Wiener in his seminal 1948 book Cybernetics: Or Control and Communication in the Animal and the Machine.  He took the word from the Greek kybernetes, meaning “steersman” or “governor.”  It was further adopted by science fiction authors into cyberpunk and, famously, cyberspace (by William Gibson).

Given the origin of the word, it has not been co-opted.  In fact, I believe it is a better term than the others in many instances, primarily because it makes humans and operators the central focus of the activities we study – either their offensive exploitation of systems or our defensive reaction and preventative actions.  It is all done because computers are tools for humans to operate more effectively in any number of areas.  They have no inherent value outside of use by humans.  Many of us technical geeks forget that while we are digging into packets or studying architecture diagrams.

Therefore, I will keep using the word cyber proudly knowing that I am using it to keep the human as the central concept in intrusion analysis, information assurance, computer security, network security, or whatever else you want to define to enable humans to use information and communicate more effectively.

United We Stand, Divided We Are Falling: Are Security Collectives a More Effective Model?

Anti-virus is a multi-billion dollar industry and is a necessary best-practice and due diligence measure everyone must implement.  Yet it is failing, by its own admission, spectacularly – delivering only least-common-denominator prevention without any signs of improvement.  How can consumers of security products guarantee better service and quality from the vendors?  By forming security collectives and achieving economies of scale in security.

NATO: An Example of Collective Security

Mahatma Gandhi was a strong man, but it took the collective work of thousands making salt during the Salt March to make a statement.  One person in need of high-quality private insurance in the U.S. does not make a market, but millions coming together into health groups can leverage their collective purchasing power to deliver the necessary health plan at the right price.  Political parties band like-minded people together to set policy and government agendas.  Governments come together for collective benefit (e.g. NATO).

Collectives can make markets, demand services, set prices, deliver effective products, and change industries.  They can leverage their group funding and influence to get what they need where individually they cannot.

One of the most difficult aspects of achieving information assurance and network security is that we rely, almost exclusively, on third-party vendors and service providers to achieve our goals – to deliver the technology, to block the right packets, to prevent the wrong applications from installation, to write secure code for their applications, etc.  These come in the form of application vendors, anti-virus vendors, intrusion detection/prevention platforms, firewall systems, network infrastructure, etc.

Yet these vendors are simply delivering the least-common-denominator in security.  They are building technology and leveraging intelligence only enough to solve the problems for the greatest number – not solving the right problems for their customers.

Mikko Hypponen recently and openly stated that anti-virus vendors fail at their job.

“The truth is, consumer-grade antivirus products can’t protect against targeted malware created by well-resourced nation-states with bulging budgets. They can protect you against run-of-the-mill malware: banking trojans, keystroke loggers and e-mail worms.” — Mikko Hypponen in Why Antivirus Companies Like Mine Failed to Catch Flame and Stuxnet, appearing in Wired, June 2012

It is rare that a company, or an entire industry, will admit its faults publicly.  What Mikko says is true: anti-virus is designed to protect customers from the most common threats – but it fails at protecting them against the most advanced threats.

How then can customers achieve the assurance necessary to operate when the products they rely on most to protect them cannot?

Today: we define our security objectives alone, individually contract with security vendors for products (e.g. anti-virus, intrusion detection systems), and then conduct incident response and intrusion analysis in a vacuum.

What if organizations were to band together with like-mission partners into security collectives (military with military, critical infrastructure with critical infrastructure, healthcare with healthcare, etc.), thereby achieving economies of scale in shared threat intelligence and greater vendor support through collective bargaining?

Is the anti-virus product not delivering the protection you (and your partners in the group) need?  Then take all of your money elsewhere to a vendor that will offer the level of service necessary.  Maybe you need a hunting service for the most advanced adversary.  Maybe you need advanced threat intelligence utilization.  Maybe you need just enough support to keep the next worm from infecting your network.

With security collectives, all customers need not be satisfied with the least-common-denominator of security.

Money talks, and big money talks even louder.  Threat intelligence is nice, shared threat intelligence is stronger.  We need to band together into collectives to make our voices heard and demand better service based on our respective missions.

Security Must Not Forget the User

Hotel Internet Network Intrusion Detection System at Work

I received this message from my hotel Internet provider, which took action to limit my access to 56Kbps for 10 minutes due to some unknown intrusion detection signature/heuristic.

I was both impressed that a hotel would have such a device in-line to protect the general Internet from aggressive and potentially damaging users and angered by the punitive action taken against my innocuous activity.

Like a well-trained security professional, I immediately took action to mitigate any damage to my system from unwanted malware.  I ran my security tools (anti-virus, software updates, spyware/adware removal, etc.).  After those did not find anything, I assumed a false positive, sucked up the slow Internet, went and read a book, and then returned to my ‘blazing fast’ 2Mbps Internet access.

It was not an hour later that I again received the message and punitive action.  This time I ran Wireshark and sniffed all the traffic to/from my laptop for the next hour and analyzed the output.  I found nothing of interest.  I was now convinced this was a false positive after using my years of security knowledge and forensic ability and finding nothing suspicious on my laptop.  (I am not going to assume there was nothing, but I can only go as far as I can).

Now I was just upset.  We in security like to think of ourselves as more knowledgeable than the average user about threats and mitigations.  We can find threats they cannot, and we can furthermore mitigate those threats for them without their knowledge, protecting them on the front lines (e.g. gateway, ISP, etc.).

However, we must also remember that computing systems are here for users – that is their entire purpose, to ultimately provide a benefit to human users.  Therefore, security must always take the user into account and include them whenever possible.

Security must begin and end with the user.  This means that when security is first envisioned it must understand the purpose of the system and the needs of a user.  If security were to make a system unusable, then there is no purpose in the system even existing and hence our existence as security professionals is questioned.  Second, this means that users must be included when possible in the security cycle.

We must help users help themselves!  Messages such as the one above (e.g. “There is a problem with your system”) do no good.  They do not help solve any problem.  They actually make the problem worse, because now the user must spend time trying to fix a problem that may or may not exist.  http://www.frontend.com/design/effective-error-messages.html

Second, such a message does not inform, increase the knowledge of, or educate the user in any way.  This message did not identify the specific detection (e.g. signature/heuristic), suggest effective mitigation, or indicate the severity of the threat.
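
For illustration only, here is a minimal sketch of what a more informative notification could contain. Python is my choice of sketching language, and every field name and value here is hypothetical – none of it comes from the hotel's actual system.

# Hypothetical fields for a user-facing security notification that informs
# rather than merely alarms: it names the detection, states a severity, and
# suggests a concrete mitigation the user can act on.
notification = {
    "summary":    "Traffic from your device matched a network detection rule.",
    "detection":  "example-signature-1234 (hypothetical signature name)",
    "severity":   "low",
    "action":     "Your bandwidth is limited to 56Kbps for 10 minutes.",
    "mitigation": "Run an anti-virus scan; if nothing is found, no further action is needed.",
    "contact":    "Contact the front desk if the limitation persists.",
}

for field, text in notification.items():
    print(f"{field:>10}: {text}")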

[important]We in the security community need to better incorporate human factors/user interface knowledge into security and integrate the user from the beginning to the end of our security engineering.[/important]

 

8 Tips for Maintaining Cyber Situational Awareness

 

Situational awareness is the perception of your environment and the comprehension of the elements within that environment, with particular focus on those critical to decision making.

Cyber defenders, operators, and analysts must maintain “situational awareness.”  This is more than sitting in a room with several large televisions streaming Twitter and [insert management’s favorite cable news channel here].

Maintaining situational awareness is the act of continuously defining your environment and identifying (and comprehending) elements critical to decision-making.  The purpose of this act is so that one can continuously orient towards the best decision.

Those familiar with the OODA Loop will recognize this as the observe phase in the loop.

It is important to know and comprehend your environment, which means both your internal situation AND the external situation.

Knowing your internal situation usually comes from dashboards, alerts, network activity graphs, parsed log files, vulnerability scanners, updates from vendors, etc.  From this view an analyst finds particularly interesting events or anomalies and understands their organization’s exposure surface.

Most importantly, the situational awareness from these data points should provide a decision-making construct to identify necessary actions (e.g. “should we patch for that?”, “should we close that firewall hole?”, “should I explore that spike in traffic?”).

However, maintaining knowledge of the internal situation is not enough.  Just as a pilot must keep their eyes on their instruments AND the horizon, an analyst must keep their eyes on their internal sensors AND the external threat environment.

Keeping track of just ONE of these environments is hard enough; how can an analyst hope to track both environments effectively, make effective decisions on that information, and act on those decisions in time?

Both management and analysts dream of a tool that will quickly and easily integrate these disparate and complicated environments to make the best decisions quickly.  Until that dream tool is created:

1. Know your organization’s mission statement, business strategy, and business rules

You’ll never know which elements or events are important if you don’t know what is important to your organization.  Be able to articulate your organization’s mission statement.  How is your organization attempting to meet its goals, and how do you support that?  How do the various business units work together to create a cohesive whole?  With this information you can make an informed decision as to the criticality of an event based on the assets being affected.

2. Be cognizant of external events affecting your organization’s mission

What is happening in your market space or global sociopolitical space that is changing your security profile?  Will that new acquisition by a foreign competitor cause you to become a target of corporate espionage?  Will hackers target your organization in retaliation for country X expelling ambassadors from country Y?

3. Be aware of internal events

What is happening inside the organization?  Is there a new desktop load being deployed?  Who is being fired today?  What are the upcoming mergers/acquisitions?  All of these affect the exposure surface of an organization and its target profile to attackers.

4. Find and follow the best

The internet is the greatest collection of human knowledge ever assembled.  Use it.  There are great security researchers and analysts constantly updating information sources with critical knowledge.  Find these sources and follow them.  Use Twitter, Google Reader, Listorious, and other sources to help aggregate this information.  Who/What are the critical sources following?

5. Be aware and able to communicate what is missing

Know what is missing from your viewpoint.  Are there any data feeds which would add to the picture?  What are the biases and limitations of your data sets?  How do these affect your decision-making?  Knowing this in advance and taking it into account will help reduce poor decision-making and unexpected consequences.

6. Know the rule sets, analytics, and data sources

The better an analyst knows their own rule-sets, analytics, and data sources, the more efficiently and accurately they can distinguish critical from non-critical events.

7. Eliminate Useless Information

One must carefully balance the need for information against the danger of information overload, which causes poor or delayed decision-making.  Therefore, eliminate any useless information sources.  This includes signatures with high false-positive rates and network activity graphs nobody pays attention to.  It is better to have less information of higher quality than a high quantity that muddles decision-making.  Replace bad data feeds with something useful – or better yet, don’t replace them at all.
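
As one concrete way to act on this, here is a minimal sketch of pruning noisy signatures by their confirmed-detection rate. Python, the signature names, the statistics, and the threshold are all illustrative assumptions, not a recommended tool or real data.

# Hypothetical per-signature statistics: (alerts fired, alerts confirmed malicious).
signature_stats = {
    "example-sid-100001": (5000, 3),    # noisy: almost entirely false positives
    "example-sid-200002": (120, 45),    # useful: a large fraction confirmed
}

def worth_keeping(fired, confirmed, min_precision=0.05):
    """Keep a signature only if its confirmed-malicious fraction clears a minimum floor."""
    return fired > 0 and (confirmed / fired) >= min_precision

for sid, (fired, confirmed) in signature_stats.items():
    print(sid, "keep" if worth_keeping(fired, confirmed) else "eliminate")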

8. Not Everyone Requires the Same Information

It is important for organizations to understand that not everyone needs the same information to maintain situational awareness.  People think differently – use that to your advantage and don’t try to make robots.  People perceive their environment differently from one another.  Allow each person to develop their own information feeds and visualizations to maximize effectiveness.

Don’t Hold Developers Blameless for Security Woes – Security Must Begin In Development

“WordPress is a very popular platform for attackers to target,” he said. “There’s not a whole lot WordPress can do if people neglect to update their WordPress or plug-in software.”  Roel Schouwenberg, senior researcher for Kaspersky Lab

SC Magazine recently published a summary of the Kaspersky report identifying WordPress as the attack vector for intruders to establish infrastructure to spread the Flashback/Flashfake OSX Trojan.  The article included quotes from Roel Schouwenberg, senior researcher for Kaspersky Lab.

Mr. Schouwenberg (@schouw), as seen in the above quote, held the WordPress developers effectively blameless for their product being used as the primary infection vector for a 600,000+ member botnet – about 2% of all Mac OS installations.

Don’t get me wrong, if you choose to host your own publicly facing service on the Internet (such as this one) you have an ethical duty to ensure that it is not being used for evil to the best of your abilities.  This includes regular updates and patching.

Additionally, Internet users share responsibility in ensuring the security of their own machines to the best of their ability.  This includes running effective anti-virus (yes, I’m talking to you, OSX users) and patching software (such as the Java Virtual Machine exploited by this trojan).

However, neither the operators of blogs (such as myself) nor the end-users of computers (again, such as myself) can be expected to reverse engineer every piece of software or service we run to search for vulnerabilities and then patch them.  That is the sole responsibility of the software developer.  Therefore, developers hold at least some of the blame.

Additionally, patching is almost always a losing security proposition, because a patch is almost always a response to a known vulnerability – which is why zero-day exploits are still so effective.

The answer is to write secure software from the beginning.  I’m not saying that all software should be, or will be, free of vulnerabilities – I have written over 1,000,000 lines of code myself and I understand fully what it means to write secure code and the incredible, perhaps impossible, challenge that entails.  But we must begin to hold software developers more accountable for vulnerabilities rather than placing all the blame on the users.  The developers are the only ones who can effectively preempt the attackers by preventing vulnerabilities in the first place and by requiring a third-party plug-in architecture (such as WordPress’s) to be secure as well (i.e. using sandboxes).

Microsoft has done a tremendous job in this endeavor.  It hasn’t been easy but over the last 10 years they have made significant progress as shown by the reduced vulnerability and exposure surface of their software as well as third-party software running on their operating system.  It can be, and should be, done.

 

The Science of Intrusion Analysis and Incident Response: Introduction

[important]This is the first of several posts in a series expanding on how to turn intrusion analysis into a science.  Subscribe to the blog via email, follow us on Twitter, or like us on Facebook to keep up![/important]

Previously I wrote about the Art of Intrusion Analysis and how I thought that Michelangelo’s quote was the best representation of how intrusion analysts arrive at knowledge.

However, my concern is not to document the art of Intrusion Analysis or Incident Response, but rather to transform the art into a science.  What does that mean?  What is the science of intrusion analysis and incident response?

First, we must define science (there are many definitions, this one will suffice for our purposes).

Science (from Latin scientia, meaning “knowledge”) is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe. — Wikipedia

Second, how will we know when intrusion analysis and incident response have become a science?  Aristotle can give us the answer.

[A] man knows a thing scientifically when he possesses a conviction arrived at in a certain way, and when the first principles on which that conviction rests are known to him with certainty—for unless he is more certain of his first principles than of the conclusion drawn from them he will only possess the knowledge in question accidentally.  — Aristotle (384 BC – 322 BC) in Nicomachean Ethics

From this I draw the following requirements to make intrusion analysis and incident response into a science:

  1. Intrusion Analysis and Incident Response must be systematic
  2. There must be first principles upon which hypotheses and predictions can be drawn and tested with experimentation
  3. There must be an organizing function to build knowledge
  4. There must be a set of theories which are generally accepted, testable, and repeatable following from first principles and hypotheses

Why do we care if Intrusion Analysis is a science or not?  An intrusion analysis and incident response science means less duplication of effort solving the same problems and a more cohesive approach to improving tools, tradecraft, and training.

Thanks to Richard (@taosecurity and Tao Security Blog) for the unanticipated use of his image! 🙂

 

20 Questions for an Intrusion Analyst

There are many approaches to finding the right people with the right talent to solve problems.  Intrusion analysis and incident response is no different.

I recently saw a great recruiting quiz to test potential employees in various knowledge areas which included programming, packet analysis, protocol analysis, snort rule writing, reverse engineering, data encoding, advanced mathematics, and other topics.  The test was designed so that it crossed so many topics one person would likely not successfully complete it.  However, it would highlight a person’s strengths and interests to give the assessor a more complete picture of the applicant.

This made me think, what topics and questions would I use to achieve the same effect?   After some deliberation, I have developed my own “20 Questions for an Intrusion Analyst” recruitment quiz (below) to highlight areas I think are important about a potential analyst joining a team.

As you may notice, I have covered several areas with these questions: analytic reasoning, creativity, adversary operations, packet analysis, intrusion detection, programming, reverse engineering, vulnerability analysis, exploit writing, and teaming.

I am purposefully not providing the answers 🙂

20 Questions for an Intrusion Analyst

  1. Describe your first experience with a computer or network threat
  2. You are given 500 pieces of straw and told that one piece is a needle which looks like straw.  How would you find the needle?  What other pieces of information would you like to have?
  3. Explain the difference between intrusion and extrusion detection
  4. Describe an adversary pivot, give an example, and explain its importance to intrusion analysis.
  5. Describe your analytic biases.
  6. Use the bit string 1101 to answer the following questions:
    1. The bit string when XORed with 0
    2. The decimal value of the string
    3. The string represented in hexadecimal
    4. Does this represent a printable ASCII character?  If so, which character?
  7. What is your favorite intrusion detection system?  What are its biases and limitations?
  8. Circle any of the following films you have seen: Hackers, War Games, Sneakers, Tron
  9. Describe a method to find an intruder using only network flow data (no content).
  10. Explain insertion and evasion of intrusion detection systems.  Give an example.
  11. Describe the activity detected by the following Snort rule.  What could be done to make the rule more effective?   alert icmp $EXTERNAL_NET any <> $HOME_NET any (msg:"activity alert!"; sid:10000011; content:"MZ";)
  12. Write a code snippet to sort the following data by the first column
10,bob
8,sally
2,suzy
3,billy
5,joey
  13. How much time/week do you spend on your own researching computer security/threat topics?  What sources do you use to maintain situational awareness on threats in the wild?
  14. What will the following code print out?  Is there a vulnerability in the code?  If so, describe the vulnerability and a potential method of exploitation.
#include <stdio.h>
#include <string.h>
int main(int argc, char *argv[])
{
   char string[40];
   strcpy(string, argv[1]);
   printf("The message was: %s\n", string);
   printf("Program completed normally!\n\n");
   return 0;
}
  15. Describe and explain any “interesting” entries in the netstat log:
Proto Local Address     Foreign Address    State
 TCP  0.0.0.0:53        0.0.0.0:0          LISTENING
 TCP  0.0.0.0:135       0.0.0.0:0          LISTENING
 TCP  0.0.0.0:445       0.0.0.0:0          LISTENING
 TCP  0.0.0.0:5357      0.0.0.0:0          LISTENING
 TCP  192.168.1.4:53    91.198.117.247:443 CLOSE_WAIT
 TCP  192.168.1.4:59393 74.125.224.39:443  ESTABLISHED
 TCP  192.168.1.4:59515 208.50.77.89:80    ESTABLISHED
 TCP  192.168.1.4:59518 69.171.227.67:443  ESTABLISHED
 TCP  192.168.1.4:59522 96.16.53.227:443   ESTABLISHED
 TCP  192.168.1.4:59523 96.16.53.227:443   ESTABLISHED
 TCP  192.168.1.4:53    208.71.44.30:80    ESTABLISHED
 TCP  192.168.1.4:59538 74.125.224.98:80   ESTABLISHED
 TCP  192.168.1.4:59539 74.125.224.98:80   ESTABLISHED
  16. A host sends out an ICMP ECHO REPLY packet.  List all of your hypotheses to explain this activity.
  17. Describe the protocol stack of the following packet and the payload. Is the packet legitimate? Why or why not?
0000  00 00 c0 9f a0 97 00 a0 cc 3b bf fa 08 00 45 10   .........;....E.
0010  00 89 46 44 40 00 40 06 72 c7 c0 a8 00 02 c0 a8   ..FD@.@.r.......
0020  00 01 06 0e 00 17 99 c5 a1 54 17 f1 63 84 80 18   .........T..c...
0030  7d 78 cc 93 00 00 01 01 08 0a 00 9c 27 34 00 25   }x..........'4.%
0040  a6 2c ff fa 20 00 39 36 30 30 2c 39 36 30 30 ff   .,.. .9600,9600.
0050  f0 ff fa 23 00 62 61 6d 2e 7a 69 6e 67 2e 6f 72   ...#.bam.zing.or
0060  67 3a 30 2e 30 ff f0 ff fa 27 00 00 44 49 53 50   g:0.0....'..DISP
0070  4c 41 59 01 62 61 6d 2e 7a 69 6e 67 2e 6f 72 67   LAY.bam.zing.org
0080  3a 30 2e 30 ff f0 ff fa 18 00 78 74 65 72 6d 2d   :0.0......xterm-
0090  63 6f 6c 6f 72 ff f0                              color..
  18. What type of encoding is used in this example: aGVsbG8gd29ybGQNCg==
  19. Who do you turn to most on technical questions?

You didn’t expect the 20th question to be here, did you?  You should expect the unexpected by now.

Protect All Information Completely? Expect Exploitation Instead

Packet Pushers recently published an opinion piece titled “Pill-Chomping Hackers and Security Whack-a-Mole.”  It makes several very good points:

All information about a target is a potential vulnerability

Information is hopelessly entangled, and one piece of innocuous information can lead to other pieces of critical information

Information is only as secure as those protecting it

There is one point worth reiterating: when you share your data (whether it is your social security number at a medical office or your credit card number at a restaurant), it is only as secure as the security of those holding it.  In essence, both organizational and personal security must expand their boundaries to include anywhere their information is held.

However, there is one point I would like to argue: the implication that all data must be secured because it is a vulnerability.  It is not possible to protect all data equally.  A data owner must place different values on different data and protect each appropriately.

Second, hiding all of your most critical data using the most secure methods still does not guarantee security.  Instead of attempting to build the best security controls and assuming they work, it is better to protect your data as well as possible and then assume you will be exploited.

Don’t just protect the data – watch for signs of exploitation and prevent further exploitation.  In the case of a social security number in the real world, if one assumes the number will be stolen and misused, then it is best to watch for further misuse (e.g. unauthorized new lines of credit being opened, unexpected activity on credit cards, etc.).

Furthermore, reduce loss.  If at all possible, make sure that any compromise is as insignificant as possible.  In the real world, it is best to reduce password re-use so that if a password to one application or website is compromised, not all of your passwords are compromised.

Yes, protect your data as best as you can, but assume adversaries are out to exploit you – and that they will be successful.

