Always A Bad Day For Adversaries

Category: Defense

Enabling Skype Two Factor Authentication

Skype is an important communications medium.  That importance also makes it a target for adversaries.  Two-factor authentication is one of the most important defensive actions you can take to prevent successful attacks.  Therefore, you should enable two-factor authentication for Skype!

However, this is not straightforward.  Here’s how you do it.  THE THIRD STEP IS THE MOST IMPORTANT – otherwise an adversary can still bypass two-factor authentication by logging in with the Skype username.

  1. Link your Skype and Microsoft accounts
  2. Enable two-factor authentication for the associated Microsoft account
  3. Disable login via the Skype username (via the sign-in preferences panel)

4 Qualities of Defensible Products – Secure Products Are Not Enough

For decades the industry has worked to build secure products: products that withstand attacks, usually by reducing vulnerabilities and exposures.

However, what happens when that fails and an attack succeeds in spite of the work done to secure the product?  I propose that we require both secure products AND defensible products: products that not only resist attacks but can also be successfully defended when attacks bypass protections.

4 Qualities of Defensible Products

  1. Visibility – the visibility necessary to detect unauthorized use and malicious attacks
  2. Transparency – the transparency into the product’s operations necessary to conduct a proper investigation and response after detection
  3. Controls – the controls necessary to remediate a threat after detection and investigation
  4. Resilience – the ability to return to a working state quickly after remediation (or to remain operational during an attack)
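
As a rough illustration only (the class and field names below are hypothetical, not drawn from any real product), the four qualities could be treated as a simple self-assessment checklist:

    from dataclasses import dataclass

    @dataclass
    class DefensibilityAssessment:
        """Hypothetical 0-5 self-assessment against the four qualities."""
        visibility: int    # can we detect unauthorized use and malicious attacks?
        transparency: int  # can we investigate and respond after detection?
        controls: int      # can we remediate a threat once found?
        resilience: int    # can we stay operational or recover quickly?

        def weakest_quality(self) -> str:
            scores = vars(self)
            return min(scores, key=scores.get)

    product = DefensibilityAssessment(visibility=4, transparency=1, controls=3, resilience=3)
    print(product.weakest_quality())  # -> "transparency": where to invest next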

Snakes and Ladders: How Intrusion Analysis and Incident Response is Like a Board Game and the Critical Role of Pivoting

Pivoting is, in my humble opinion, the most important skill in intrusion analysis and incident response.  I have been teaching, training, and mentoring intrusion analysts for over 7 years.  In my experience, this is the most difficult skill to train, as it requires creativity, attention to detail, and a full knowledge of one’s data sources and how to exploit them.

Pivoting is the ability to identify a critical piece of information and to maximally exploit that information across all of your sources, substantially increasing your knowledge of the adversary and identifying the next critical piece of information – which is then pivoted upon, moving you deeper into the adversary’s operation and, hopefully, earlier into the kill chain.

An example: trolling through log files, you discover a very odd HTTP user-agent accessing your website.  You query that user-agent across all the log entries and identify a significant number of clients presenting the same string.  (Pivot on the user-agent.)  You extract all of those log entries and identify a regular time pattern of access indicating automated functionality.  You also notice that all of the requests target a very odd resource/page – bob.php.  (Pivot on bob.php.)  You then take that page name and examine all HTTP traffic in your network over the last two days, discovering that several hosts in your network have been POSTing odd data to bob.php.  At this point you may retrieve the hosts and conduct forensic analysis on them, and so on.  You finally discover that the adversary has compromised several internal hosts and has had them POSTing data to a page planted on your external-facing website, which the adversary then uses to extract the data.  You now have several pieces of mitigative value: the source IP of the adversary’s external infrastructure, the page deposited on your website, any malicious tools discovered on the hosts, the HTTP traffic, etc.  All of these are collectively more valuable to defense than any one piece of information independently.
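
A minimal sketch of that pivot sequence in code, assuming the web logs have already been parsed into records; the field names and values below are hypothetical:

    from collections import Counter

    # Tiny illustrative log records; field names and values are hypothetical.
    http_logs = [
        {"src_ip": "203.0.113.7", "method": "GET",  "uri": "/index.html", "user_agent": "Mozilla/5.0"},
        {"src_ip": "203.0.113.7", "method": "GET",  "uri": "/bob.php",    "user_agent": "OddAgent/1.0"},
        {"src_ip": "10.1.2.3",    "method": "POST", "uri": "/bob.php",    "user_agent": "OddAgent/1.0"},
        {"src_ip": "10.1.2.9",    "method": "POST", "uri": "/bob.php",    "user_agent": "OddAgent/1.0"},
    ]

    def pivot_on_user_agent(logs, suspicious_ua):
        """Pivot 1: pull every request presenting the odd User-Agent string."""
        return [e for e in logs if e["user_agent"] == suspicious_ua]

    def pivot_on_resource(logs, resource):
        """Pivot 2: find every host POSTing data to the suspicious resource."""
        return sorted({e["src_ip"] for e in logs
                       if e["uri"] == resource and e["method"] == "POST"})

    hits = pivot_on_user_agent(http_logs, "OddAgent/1.0")
    print(Counter(e["uri"] for e in hits).most_common())  # surfaces /bob.php
    print(pivot_on_resource(http_logs, "/bob.php"))       # internal hosts to examine forensically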


A Step Function

In this way, analysis and incident response is a step function.  Most of the time, analysis is, in a sense, rote: looking through log files, examining and validating alerts, inspecting various binaries – step by step peeling back the onion of the adversary’s operations.  At times we even move backwards, as an analyst makes an incorrect assumption or a poor hypothesis that costs time, money, and resources to recover from and correct the analytic path.  However, when a piece of critical information is discovered, it should be exploited to achieve deeper knowledge, moving the analysis to a “new level” of the function and substantially increasing knowledge as a whole – which, in theory, should lead to additional mitigative opportunities.


Chutes and Ladders

My favorite analogy is the game of “Chutes and Ladders” (or “Snakes and Ladders” for those outside the US).  A player slowly moves across the board block by block, but then lands on a ladder that moves them up the board substantially.  Other times, they land on a chute/snake that brings them back down.  This is the process of analysis.

Why does this matter?  It matters because this framing helps us better model the process of analysis, giving researchers an opportunity to target the parts of analysis that increase the likelihood of a step-function increase in knowledge and decrease the chance of a setback.

One way is to increase the capability of analytic tools to maximize pivoting: allowing an analyst to quickly query other data sources with a new discovery and integrate the results into the analytic picture.  The tools should also allow an analyst to ‘back up’ their analysis, removing a poor path once an error is discovered.
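
For example, here is a hypothetical sketch of an analytic “path” that records each pivot and lets the analyst back up when a hypothesis proves wrong; the names are illustrative, not taken from any particular tool:

    class AnalysisPath:
        """Hypothetical record of an analyst's pivots, supporting 'back up'."""

        def __init__(self):
            self._pivots = []  # stack of (indicator, findings) steps

        def pivot(self, indicator, findings):
            """Record a new pivot and the evidence it produced."""
            self._pivots.append((indicator, findings))

        def back_up(self):
            """Discard the most recent pivot after discovering a bad assumption."""
            return self._pivots.pop() if self._pivots else None

        def current_picture(self):
            """Everything learned along the currently believed-correct path."""
            return list(self._pivots)

    path = AnalysisPath()
    path.pivot("user-agent OddAgent/1.0", "47 matching requests, regular timing")
    path.pivot("resource /bob.php", "3 internal hosts POSTing data")
    path.back_up()  # the /bob.php lead was a dead end; remove it from the picture
    print(path.current_picture())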

These are just a couple of ideas.  I’d love to hear yours.

Security Must Not Forget the User

Hotel Internet Network Intrusion Detection System at Work

I received this message from my hotel Internet provider, which limited my access to 56 Kbps for 10 minutes due to some unknown intrusion detection signature or heuristic.

I was both impressed that a hotel would have such a device in-line to protect the general Internet from aggressive and potentially damaging users, and angered by the punitive action taken against my innocuous activity.

Like a well-trained security professional, I immediately took action to mitigate any damage to my system from unwanted malware.  I ran my security tools (anti-virus, software updates, spyware/adware removal, etc.).  After those did not find anything, I assumed a false positive, sucked up the slow Internet, went and read a book, and then returned to my ‘blazing fast’ 2 Mbps Internet access.

Not an hour later, I again received the message and the punitive action.  This time I ran Wireshark, sniffed all the traffic to and from my laptop for the next hour, and analyzed the output.  I found nothing of interest.  Having applied my years of security knowledge and forensic ability and found nothing suspicious on my laptop, I was now convinced this was a false positive.  (I will not assume there was nothing there, but I can only go as far as my tools and skills allow.)
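
For the curious, a rough sketch of that kind of triage using Scapy; the capture filename is hypothetical:

    from collections import Counter
    from scapy.all import rdpcap, IP

    # Summarize a saved capture by destination, looking for anything unexpected.
    packets = rdpcap("hotel_capture.pcap")  # hypothetical filename
    destinations = Counter(pkt[IP].dst for pkt in packets if IP in pkt)

    for dst, count in destinations.most_common(20):
        print(f"{dst}\t{count} packets")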

Now I was just upset.  We in security like to think of ourselves as more knowledgeable than the average user about threats and mitigations.  We can find threats they cannot, and we can mitigate those threats for them without their knowledge, protecting them on the front lines (e.g. the gateway, the ISP, etc.).

However, we must also remember that computing systems exist for users – their entire purpose is ultimately to benefit human users.  Therefore, security must always take the user into account and include them whenever possible.

Security must begin and end with the user.  This means that when security is first envisioned, it must account for the purpose of the system and the needs of the user.  If security makes a system unusable, then there is no purpose in the system existing at all – and our existence as security professionals is called into question.  Second, it means that users must be included in the security cycle whenever possible.

We must help users help themselves!  Messages such as the one above (e.g. “There is a problem with your system”) do no good.  They do not help solve any problem; they actually make the problem worse, because now the user must spend time trying to fix a problem that may or may not exist.  (See http://www.frontend.com/design/effective-error-messages.html.)

Second, such a message does not inform, educate, or increase the knowledge of the user in any way.  This one did not identify the specific detection (e.g. the signature/heuristic), suggest an effective mitigation, or indicate the severity of the threat.
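
As a sketch of what a more useful notification might carry (every field value below is invented purely for illustration):

    # What a user-facing alert could carry instead of "there is a problem
    # with your system"; every value here is invented for illustration.
    alert = {
        "detection":    "Outbound connection burst matching a mass-mailing heuristic",
        "severity":     "medium",
        "evidence":     "300+ connections to port 25 between 14:02 and 14:03",
        "mitigation":   "Run an anti-malware scan; review recently installed software",
        "action_taken": "Bandwidth limited to 56 Kbps for 10 minutes",
        "contact":      "Provider support line for questions or appeal",
    }

    for field, value in alert.items():
        print(f"{field:>12}: {value}")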

We in the security community need to better incorporate human factors and user interface knowledge into security and integrate the user from the beginning to the end of our security engineering.


Don’t Hold Developers Blameless for Security Woes – Security Must Begin In Development

“WordPress is a very popular platform for attackers to target,” he said. “There’s not a whole lot WordPress can do if people neglect to update their WordPress or plug-in software.” – Roel Schouwenberg, senior researcher, Kaspersky Lab

SC Magazine recently published a summary of the Kaspersky report identifying WordPress as the attack vector for intruders to establish infrastructure to spread the Flashback/Flashfake OSX Trojan.  The article included quotes from Roel Schouwenberg, senior researcher for Kaspersky Lab.

Mr. Schouwenberg (@schouw), as seen in the above quote, held the WordPress developers effectively blameless for their product being used as the primary infection vector for a 600,000+ member botnet – about 2% of all Mac OS installations.

Don’t get me wrong: if you choose to host your own publicly facing service on the Internet (such as this one), you have an ethical duty to ensure, to the best of your ability, that it is not being used for evil.  This includes regular updates and patching.

Additionally, Internet users share responsibility for ensuring the security of their own machines to the best of their ability.  This includes running effective anti-virus (yes, I’m talking to you, OSX users) and patching software (such as the Java Virtual Machine exploited by this trojan).

However, neither the operators of blogs (such as myself) nor the end users of computers (again, such as myself) can be expected to reverse engineer every piece of software or service we run, search it for vulnerabilities, and then patch them.  That is the sole responsibility of the software developer.  Therefore, developers hold at least some of the blame.

Additionally, patching is almost always a losing security proposition, because a patch is almost always a response to an already known vulnerability – which is why zero-day exploits are still so effective.

The answer is to write secure software in the first place.  I’m not saying that all software should be, or will be, free of vulnerabilities – I have written over 1,000,000 lines of code myself and I understand fully what it means to write secure code and the incredible, maybe impossible, challenge that entails.  But we must begin to hold software developers more accountable for vulnerabilities rather than placing all the blame on the users.  The developers are the only ones who can effectively preempt attackers: by preventing vulnerabilities in the first place and by requiring third-party plug-in architectures (such as WordPress’s) to be secure as well (i.e. using sandboxes).
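
To make the sandbox idea concrete, here is a generic sketch (this is not WordPress’s actual plug-in API) of a host that grants plug-ins only a narrow, reviewed set of capabilities:

    # Generic sketch of a capability-restricted plug-in host; not WordPress's API.
    class PluginSandbox:
        ALLOWED_CAPABILITIES = {"read_posts", "render_widget"}

        def __init__(self, requested):
            # Grant only what the plug-in asks for AND the host is willing to expose.
            self.granted = set(requested) & self.ALLOWED_CAPABILITIES

        def call(self, capability, *args):
            if capability not in self.granted:
                raise PermissionError(f"capability not granted: {capability}")
            print(f"executing {capability} with {args}")

    sandbox = PluginSandbox(requested=["read_posts", "write_files"])  # write_files refused
    sandbox.call("read_posts", 10)
    # sandbox.call("write_files", "/etc/passwd")  # would raise PermissionError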

Microsoft has done a tremendous job in this endeavor.  It hasn’t been easy, but over the last 10 years they have made significant progress, as shown by the reduced vulnerability and exposure surface of their software as well as of third-party software running on their operating system.  It can be, and should be, done.


Protect All Information Completely? Expect Exploitation Instead

Packet Pushers recently published an opinion titled “Pill-Chomping Hackers and Security Whack-a-Mole.”  It makes several very good points:

  1. All information about a target is a potential vulnerability
  2. Information is hopelessly entangled – one piece of innocuous information can lead to other pieces of critical information
  3. Information is only as secure as those protecting it

There is one point worth reiterating: when you share your data (whether it is your social security number at a medical office or your credit card number at a restaurant), it is only as secure as those holding it.  In essence, both organizational and personal security must expand their boundaries to include anywhere their information is held.

However, there is one point I would like to argue with: the implication that all data must be secured equally because any of it is a potential vulnerability.  It is not possible to protect all data equally.  A data owner must place different values on different data and protect them accordingly.
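
A toy sketch of that idea; the tiers, controls, and datasets below are all hypothetical:

    # Not all data merits the same protection; tier it and apply controls
    # proportional to value. Tiers, controls, and datasets are hypothetical.
    PROTECTION_TIERS = {
        "public":   {"encrypt_at_rest": False, "monitor_access": False},
        "internal": {"encrypt_at_rest": True,  "monitor_access": False},
        "critical": {"encrypt_at_rest": True,  "monitor_access": True},
    }

    data_inventory = {
        "marketing_site_content": "public",
        "employee_directory":     "internal",
        "customer_ssn_records":   "critical",
    }

    for dataset, tier in data_inventory.items():
        print(f"{dataset}: {PROTECTION_TIERS[tier]}")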

Second, hiding all of your most critical data using the most secure methods available still does not guarantee security.  Instead of attempting to build the best security controls and assuming they work, it is better to protect your data as well as possible and then assume you will be exploited.

Don’t just protect the data: watch for signs of exploitation and prevent further exploitation.  In the real-world case of a social security number, if one assumes the number has been stolen and misused, then it is best to watch for further misuse (e.g. unauthorized new lines of credit being opened, unexpected activity on credit cards, etc.).

Furthermore, reduce loss.  If at all possible, make sure that any compromise is as insignificant as possible.  In the real world, it is best to avoid password re-use so that if the password to one application or website is compromised, not all of your passwords are compromised.

Yes, protect your data as best you can, but assume adversaries are out to exploit you – and that they will be successful.

A New Security Accounting or How to Win Against a Formidable Adversary

Many intrusion analysts are constantly plagued by a nagging thought that we are fighting a losing battle.  The problem only gets worse; it never seems to get better.  There are only more hackers, more damage, more vulnerabilities, more exploits, more toolkits, etc.  Every day we feel overwhelmed and under-resourced.

This feeling is not wrong.  Our instinct is correct.  We are fighting a losing battle.  There are many more adversaries than there are network defenders.  The adversary needs only one vulnerability, one exposure, or one exploit to win – while we need to find and patch all the vulnerabilities and exposures and prevent all exploits to just stay even.  We have already lost before even playing the game.

To win this battle, or bring it to a draw, we must initiate a new security accounting.  We must change our thinking.

First, we must accept loss.  We must understand that we will be penetrated and exploited.  We must focus on early detection and discovery, and on minimizing loss through mitigation.  We must not count every intrusion as a failure.  This is a game to be played over decades, not days.

Second, we must be truthful with ourselves and then truthful with others.  No more counting scans detected by the firewall as “millions of blocked intrusions.”

Third, we must stop accounting for security in terms of the money and resources we have spent to secure ourselves.  It is a self-centered and foolish accounting.  We must start focusing on how much we forced the adversary to spend in money and resources to exploit our network – their dollars per megabyte of data stolen.  The larger we make that ratio, the more secure we become: (1) we reduce the number of adversaries operating against us, because only the best resourced will be able to profit from their operations, and (2) we reduce the effectiveness of the adversaries who do operate against us by increasing their costs and decreasing their gains.
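
As a back-of-the-envelope illustration of the metric (all figures below are invented):

    # Back-of-the-envelope version of the proposed metric: adversary dollars
    # spent per megabyte of data stolen. All figures below are invented.
    adversary_spend_usd = 250_000   # estimated cost of the campaign against us
    data_stolen_mb = 500            # data actually exfiltrated

    cost_per_mb = adversary_spend_usd / data_stolen_mb
    print(f"Adversary cost: ${cost_per_mb:,.0f} per MB stolen")

    # Raising the numerator (more effort required) or shrinking the denominator
    # (less data lost) both increase the ratio -- and price out most adversaries.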

Some may say that this is a losing proposition.  What about an adversary willing to spend $10 million to exploit my network and steal my intellectual property when I can only spend $1 million to protect it?  You’re screwed.  The adversary obviously values your data more than you do.  The only hope is to band together with other targets/victims, combining forces in the hope of creating parity with the adversary.

An analogy: if one country is willing to spend billions to build a military to defeat another country, and the target country cannot even spend millions on defense, the target will likely lose.  Its only hope is to create an alliance with other countries in the hope of (1) fielding an effective combined force against the adversary or (2) pulling other levers (e.g. trade, economics, etc.) that cost the hostile country enough to make the attack worthless.

In the end, it comes down to a relationship built on value.  As long as the adversary is making a profit (however that is defined) there is no incentive for them to stop.

There are two types of victims: victims of opportunity and victims of interest.

Victims of opportunity are victims because they were available to the adversary at the right time but possess little value.  If the adversary were to lose access, they would likely not notice.  These organizations can use standard security practices to protect themselves, reducing their likelihood of becoming a victim.  Example: a home computer infected with a botnet.

Victims of interest are victims because they possess great value to the adversary.  If the adversary were to lose access to the victim, it would be noticed, and the adversary would spend resources regaining and maintaining that access.  The adversary will not stop victimizing the organization until the relationship between adversary and victim changes and the victim no longer provides enough benefit to justify the cost of exploitation.  Example: Advanced Persistent Threats.

Therefore, a security strategy must be based on the adversary/victim relationship.  The only way to win against a formidable adversary – one in a considerably better position than yourself – is to make it too costly for them to wage war.  (Note: the cost will be different for each adversary; some may be sensitive to financial costs while others might be sensitive to jail or loss of freedom, etc.)

Why Malware Numbers Don’t Matter and What it Means for Security Accounting

McAfee recently reported over 75 million new malware samples detected in 2011. This number, while shocking, no longer matters as an absolute value. It also highlights a glaring flaw in network defense philosophy.

First, this number is only calculated from all detected hashes. Any change in the binary results in a new, unique hash, so only a small change by the adversary is needed to produce a “new” piece of malware. A simple thought experiment: if there were 75 million malware samples, each differing from the next by only one byte, this method would count 75 million “unique” pieces of malware.
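
The point is easy to demonstrate; the “binary” below is a stand-in, not real malware:

    # Flipping a single byte of the same "binary" produces a completely
    # different hash -- two "unique samples" by count, one piece of malware.
    import hashlib

    sample = bytearray(b"MZ\x90\x00 pretend this is a malware binary")
    variant = bytearray(sample)
    variant[-1] ^= 0x01  # one-byte change, e.g. a repack or padding tweak

    print(hashlib.sha256(bytes(sample)).hexdigest())
    print(hashlib.sha256(bytes(variant)).hexdigest())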

Second, the number alone says nothing about the threat environment. It does not illustrate the attack vectors, vulnerabilities, or exposures used by the malware; nor does it describe the danger or effectiveness of the various samples. Maybe there is only one piece of malware and its 75 million varieties are all harmless – 75 million is then a very large number signifying nothing.

However, it does matter as a relative value showing the number of unique samples over time. For example, in 2007 unique malware samples rose 565% from the previous year [from A Brief History of Malware]. The velocity of unique malware samples detected in the wild (or the slope of the line if you prefer) is clearly increasing.

Why? Because malware authors and operators are exploiting the primary network defense practice: default allow all – the blacklist. Defenders are still stuck in the “allow all” mindset, trusting everything except code that fails certain tests or exhibits certain behaviors. To exploit this mindset, an adversary only has to change their malware enough to bypass these filters (e.g. antivirus). As defenders update their blacklists/antivirus/firewalls, the malware authors make a small change or re-pack and re-deploy the malware, bypassing the new rules/filters/etc.

For an adversary, changing a capability slightly and re-deploying it is relatively inexpensive – particularly with pervasive exploit kits such as BlackHole. For the defender, finding the new malware, developing a signature, and deploying that signature is relatively costly, leaving the security accounting on the side of the adversary.

To win this battle, the defender must switch to a known-good model, or “deny all with exceptions” – also known as the whitelist. However, as we have seen, this simply gives the adversary a new target: the whitelist itself.
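
A minimal sketch of the known-good model; the “binaries” below are stand-ins for illustration:

    # Minimal sketch of "deny all with exceptions": execution is refused unless
    # the binary's hash appears on an approved allowlist. Binaries are stand-ins.
    import hashlib

    approved_binary = b"known-good build of an approved tool"
    ALLOWLIST = {hashlib.sha256(approved_binary).hexdigest()}

    def may_execute(binary: bytes) -> bool:
        return hashlib.sha256(binary).hexdigest() in ALLOWLIST

    print(may_execute(approved_binary))      # True: known good
    print(may_execute(b"repacked dropper"))  # False: unknown, denied by default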
