NWW published a great article yesterday talking about the need for 100% accurate packet capture. This blog is extracted verbatim from the article. Given that we wrote it, we think this is justifiable. Our thanks to John Dix at NWW for supporting us on it.
A new class of packet-based network monitoring and recording solutions is emerging that enables companies running high-speed and ultra-high-speed networks to address network blindness, a condition that exposes organizations to a raft of operational, legal, compliance and reputational risks. With the cost of network downtime measured in millions of dollars per hour, knowing what’s going on inside the network isn’t just a nice-to-have, it’s critical.
Today’s 10 Gigabit networks are so complex that duplicate traffic from badly configured switches and routers invariably consumes bandwidth without being noticed, causing everything from videoconferencing outages to failures of critical business applications. Installing a packet-based monitoring and recording fabric enables organizations to alleviate network blindness and gain visibility into network congestion issues.
It’s clear that ultra-high-speed networking is on the horizon in many industries. In a recent survey of 100 organizations in North America, 71% said they have made the transition to 10Gbps networking. The companies that participated included tier-two telcos, online service providers, retailers, manufacturing companies, health service providers and gaming companies, all with annual revenue of at least $10 billion. In addition, 43% of the organizations surveyed said they have plans to adopt 40Gbps or 100Gbps networking.
According to the senior networking, operations and security professionals surveyed, many of their incumbent network monitoring and security vendors are unable to reliably manage higher network speeds. In fact, 47% of the respondents believe they are missing potentially significant network events due to failing or under-performing systems. Meanwhile, 65% of the organizations do not record network traffic for forensic analysis of network events, and 43% reported experiencing “significant difficulties” investigating and remediating network events.
Other findings of note:
– 33% of organizations reported experiencing some kind of data loss in the previous 12 months.
– 39% were unable to accurately identify what was lost.
– 42% admitted to having been the victim of a cyberattack in the past 12 months.
– 67% of those victimized by an attack admitted to having serious problems investigating the attack.
There is a plethora of 10Gbps-capable monitoring tools available, but most of them develop a nasty case of network myopia as network speeds hit 3Gbps. What they claim to be able to do and what they actually do are turning out to be quite different things. The challenge they have is that they are unable to get packets off the wire fast enough to figure out what’s really going on: the interrupt rates of standard NICs overwhelm CPUs, causing packets to be dropped. Hence the need for dedicated, purpose-built packet capture hardware.
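On Linux hosts, one quick way to see whether a standard NIC is already dropping packets is to read the kernel's per-interface counters in /proc/net/dev. The sketch below is a minimal parser for that file's standard layout (the interface names and counter values in the sample text are made up for illustration):

```python
def rx_drops(proc_net_dev_text):
    """Parse /proc/net/dev content and return {interface: receive-drop count}."""
    drops = {}
    for line in proc_net_dev_text.splitlines()[2:]:  # skip the two header lines
        if ":" not in line:
            continue
        iface, stats = line.split(":", 1)
        fields = stats.split()
        # Receive columns: bytes packets errs drop fifo frame compressed multicast
        drops[iface.strip()] = int(fields[3])
    return drops

# Hypothetical sample in the /proc/net/dev format; on a live host you would
# read open("/proc/net/dev").read() instead.
SAMPLE = """\
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
    lo:  104013    1027    0    0    0     0          0         0   104013    1027    0    0    0     0       0          0
  eth0: 9843210   81234    0  512    0     0          0         0  1234567   43210    0    0    0     0       0          0
"""

print(rx_drops(SAMPLE))  # eth0 shows 512 packets dropped on receive
```

A non-zero, climbing drop counter at moderate load is exactly the symptom described above: the host can't service the NIC fast enough, and whatever analysis sits downstream is working from an incomplete picture.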
In the past, simple visibility to Layer 4 was enough, but that’s no longer the case as so many applications are now Web-based. Without visibility into the application layer, organizations can’t distinguish between Skype and SAP or Dropbox and FarmVille. With visibility into the application layer, organizations can start to see what’s really happening and who’s responsible.
Packet-based monitoring is the only way to get really high levels of granular visibility into the network. Compared to sampled NetFlow-based tools or traditional SNMP polling, the information resolution is an order of magnitude greater. It’s only by recording packet-level information that organizations can go back in time to perform forensic investigations into events. But remember, the output of a packet-based tool is only ever going to be as good as the quality of the input, and without every packet, almost any analysis you do is pretty much pointless.
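The resolution gap between sampled flows and full packet capture is easy to quantify. Under 1-in-N packet sampling, each packet is observed independently with probability 1/N, so a whole flow can slip through unseen. A back-of-envelope sketch (the flow sizes and sampling rate are illustrative, not from the survey):

```python
def miss_probability(flow_packets, sample_rate):
    """Probability that 1-in-N packet sampling observes none of a flow's packets.

    Each packet is sampled independently with probability 1/sample_rate,
    so the whole flow is missed with probability (1 - 1/N)**packets.
    """
    return (1 - 1 / sample_rate) ** flow_packets

# A short 5-packet flow under a typical 1-in-1000 sampling rate:
print(f"{miss_probability(5, 1000):.3f}")   # ~0.995: almost certainly invisible

# Full packet capture is the sample_rate=1 case: nothing is missed.
print(miss_probability(5, 1))               # 0.0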
Tools that accurately record network traffic (as part of an integrated security solution) have proven essential for post-attack forensics, including the ability to understand what data may have been lost and how. Security teams and network operation teams need to isolate packets for forensic investigation and deal with rising expectations for network uptime and performance.
In order to select technologies that will scale to meet their needs as they move to 10Gbps networks and beyond, organizations need to start asking the network monitoring and network security solution vendors a different set of questions:
• How far up the OSI stack can you see?
• How fast can you capture and record packets before you start losing them?
• How can I access the raw packets that have been captured?
• Do the packets leave the data center when I access them (big compliance risk)?
To be acceptable, vendors need to be able to prove their claims around network performance (the standard metric is dropped packet counts at different network speeds).
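When evaluating those claims, it helps to know what "line rate" actually demands. The worst case is a stream of minimum-size 64-byte Ethernet frames, each of which also costs 8 bytes of preamble and a 12-byte inter-frame gap on the wire. A back-of-envelope sketch of the resulting packet rate:

```python
def worst_case_pps(link_bps, frame_bytes=64):
    """Worst-case packets/sec at line rate for a given Ethernet frame size.

    Each frame occupies frame_bytes + 8 (preamble) + 12 (inter-frame gap)
    bytes of wire time, i.e. 84 bytes for a minimum 64-byte frame.
    """
    wire_bytes = frame_bytes + 8 + 12
    return link_bps / (wire_bytes * 8)

print(f"{worst_case_pps(10e9):,.0f} pps")   # ~14,880,952 pps at 10Gbps
print(f"{worst_case_pps(100e9):,.0f} pps")  # ten times that at 100Gbps
```

Any vendor claiming zero-loss capture should be able to demonstrate it against that packet rate, not just against the more forgiving large-frame case.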
Today, a new breed of distributed packet-based network monitoring and recording fabrics is emerging to help organizations solve the problems of network optimization, real-time anomaly detection and forensic examination. They reduce both the amount of time it takes to investigate any given network issue (mean time to resolution) and the average skill set required to do so.
One-hundred percent packet capture-based network monitoring and recording platforms provide a common network management infrastructure that allows users to chop and change between tool sets quickly and easily, depending on the issues they need to address. Packet-level recording is only going to become a bigger deal as the true impact of recent security breaches is felt and SEC legislation is imposed to force organizations to come clean. What’s your plan?