To capture or not to capture? Why that’s no longer a valid question

Posted April 10th, 2011 by Tim Nichols

In the last week, both Symantec and IBM have released their annual security surveys, which provide rich insight into the volume and range of security attacks circling the Internet.

As expected, Symantec Corp in their ‘Internet Security Threat Report’ highlighted a massive increase in threat volume (286 million new threats last year), accompanied by several new ‘megatrends’ in the threat landscape. The report highlights increases in both the frequency and sophistication of targeted attacks on enterprises; the continued growth of social networking sites as an attack distribution platform; and a change in attackers’ infection tactics. In addition, the report explores how attackers are exhibiting a notable shift in focus toward mobile devices.

Specifically, Symantec identified attacks such as Hydraq and Stuxnet as posing a growing threat to enterprises in 2010. To increase the likelihood of successful, undetected infiltration into the enterprise, an increasing number of these targeted attacks are leveraging zero-day vulnerabilities to break into computer systems. As one example, Stuxnet alone exploited four different zero-day vulnerabilities to attack its targets. While the high-profile targeted attacks of 2010 attempted to steal intellectual property or cause physical damage, many targeted attacks preyed on individuals for their personal information. For example, the report found that data breaches caused by hacking resulted in an average of more than 260,000 identities exposed per breach in 2010, nearly quadruple that of any other cause. At Endace, we see the prevention of data loss as being a key driver of technology investment over the next 12 months as organisations start to really understand the reputational damage caused when private customer data goes public.

Along very similar lines, IBM has released results from its annual X-Force 2010 Trend and Risk Report, highlighting that public and private organizations around the world faced increasingly sophisticated, customized IT security threats in 2010. Based on intelligence gathered through research of public vulnerability disclosures, and the monitoring and analysis of more than 150,000 security events per second during every day of 2010, the report documented more than 8,000 new vulnerabilities, a 27 percent rise from 2009. Public exploit releases were also up 21 percent from 2009 to 2010. This data points to an expanding threat landscape in which sophisticated attacks are being launched against increasingly complex computing environments. While overall there were significantly fewer phishing attacks relative to previous years, “spear phishing,” a more targeted attack technique, grew in importance in 2010. This further indicates that cyber criminals have become more focused on the quality of attacks, rather than quantity.

For us, there is an obvious takeaway from all this analysis, and that’s an absolute need for full packet capture – from the edge to the core of the network. Put simply:

  • Without full packet inspection (in IDS systems and other such network security systems), there’s no way that organisations can expect to identify and stop these attacks on their way into an organisation.
  • Without full packet capture (for the purpose of forensics), there’s no way that organisations can expect to work out what really happened and stop it happening again.
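The forensics point above assumes that captured traffic ends up on disk in a form that can be replayed and analysed after the fact. As an illustrative sketch only – not a description of Endace’s products – the widely used libpcap trace format stores exactly that: a small global header followed by a timestamped record for every frame seen on the wire. The helper functions below are hypothetical names written for this example:

```python
import struct

PCAP_MAGIC = 0xA1B2C3D4      # little-endian magic, microsecond timestamps
LINKTYPE_ETHERNET = 1        # link-layer type code for Ethernet

def write_pcap(packets, snaplen=65535):
    """Serialise (ts_sec, ts_usec, frame_bytes) tuples into pcap bytes."""
    # Global header: magic, version 2.4, tz offset, sigfigs, snaplen, linktype
    out = struct.pack("<IHHiIII", PCAP_MAGIC, 2, 4, 0, 0,
                      snaplen, LINKTYPE_ETHERNET)
    for ts_sec, ts_usec, frame in packets:
        incl = min(len(frame), snaplen)          # truncate to snap length
        # Per-record header: timestamp, captured length, original length
        out += struct.pack("<IIII", ts_sec, ts_usec, incl, len(frame))
        out += frame[:incl]
    return out

def read_pcap(data):
    """Parse pcap bytes back into (ts_sec, ts_usec, frame_bytes) tuples."""
    magic, _vmaj, _vmin, _tz, _sig, _snap, link = struct.unpack_from(
        "<IHHiIII", data, 0)
    if magic != PCAP_MAGIC or link != LINKTYPE_ETHERNET:
        raise ValueError("not a little-endian Ethernet pcap trace")
    off, packets = 24, []
    while off < len(data):
        ts_sec, ts_usec, incl, _orig = struct.unpack_from("<IIII", data, off)
        off += 16
        packets.append((ts_sec, ts_usec, data[off:off + incl]))
        off += incl
    return packets
```

A round trip – writing one dummy 64-byte frame and reading it back – is enough to see why full capture matters for incident response: every byte and its timestamp survive, so an analyst can reconstruct exactly what crossed the wire rather than relying on summary logs.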

Mike Rothman from Securosis put this need into context beautifully last week when he wrote a short piece about the recent RSA APT attack:

Obviously everyone remains all wrapped up in the details (or lack thereof) of the RSA breach. The RSA folks started talking a bit about the attack and their response. Then Gartner’s Avivah Litan said RSA should have known better. What? Analyst mediocrity makes me sad. There is a clear disconnect between the attack that happened and the technology she believes RSA should have used to stop it. How could algorithms for risk-based authentication and consumer fraud detection in web-based applications have stopped an employee from opening a spreadsheet and subsequently getting pwned by malware? Yes, in hindsight, RSA should have had full packet capture everywhere. Yes, their low-level finance administrators should have been trained to not click on things. But there is no technical control to prevent user stupidity. I guess there’s no way to prevent analyst stupidity, either. Now that is something I should have known better.