Network Security and Management Challenges – Part 3: Agility

Original Entry by : Endace

The Need for Agile Cyberdefense – and How to Achieve it

Key Research Findings

  • 75% of organizations report significant challenges with alert fatigue and 82% report significant challenges with tool fatigue
  • 91% of respondents report significant challenges in “integrating solutions to streamline processes, increase productivity and reduce complexity”.
  • Investigations are often slow and resource-intensive, with 15% of issues taking longer than a day to investigate and involving four or more people in the process.

In part two of this series of blog posts, we looked at Visibility as one of the key challenges uncovered in the research study Challenges of Managing and Securing the Network 2019.

In this third post, we’ll be discussing another of the key challenges that organizations reported: Agility.

From a cybersecurity and performance management perspective, the term “Agility” can mean two different things. In one sense it can mean the ability to investigate and respond quickly to cyber threats or performance issues. But it can also refer to the ability to rapidly deploy new or upgraded solutions in order to evolve the organization’s ability to defend against, or detect, new security threats or performance issues. 

To keep things clear let’s refer to these two different meanings for agility as “Agile Response” and “Agile Deployment.”

Enabling Agile Response

In the last post, we looked at the data sources organizations can use to improve their visibility into network activity – namely using network metadata, combined with full packet data, to provide the definitive evidence that enables analysts to quickly and conclusively investigate issues. 

In order to leverage this data, the next step is to make it readily available to the tools and teams that need access to it. Tools can access the data to more accurately detect issues, and teams get quick and easy access to the definitive evidence they need to investigate and resolve issues faster and more effectively. 

Organizations report that they are struggling with two significant issues when it comes to investigating and resolving security or performance issues. 

The first is they are drowning in the sheer volume of alerts being reported by their monitoring tools. Investigating each issue is a cumbersome and resource-intensive process, often involving multiple people. As a result there is typically a backlog of issues that never get looked at – representing an unknown level of risk to the organization.

The second issue, which is compounding the alert fatigue problem, is that the tools teams use are not well-integrated, making the investigation process slow and inefficient. In fact, 91% of the organizations surveyed reported significant challenges in “integrating solutions to streamline processes, increase productivity and reduce complexity.” The result is analysts are forced to switch from tool to tool (also known as “swivel chair integration”) to try and piece together a “big-picture” view of what happened.

Integrating network metadata and packet data into security and performance monitoring tools is a way to overcome both these challenges:

  • It gives teams access to a shared, authoritative source of truth about network activity. Analysts can pivot from an alert, or a metadata query, directly to the related packets for conclusive verification of what took place. This simplifies and accelerates investigations, making teams dramatically more productive and reducing alert fatigue.
  • It enables a standardized investigation process. Regardless of the tool an analyst is using, they can get directly from an alert or query to the forensic detail – the packets – in the same way every time. 
  • It enables data from multiple sources to be correlated more easily. This is typically what teams are looking to achieve through tighter tool integration. Network data provides the “glue” (IP addresses, ports, time, application information etc.) that enables data from other diverse sources (log files, SNMP alerts etc.) to be correlated more easily. 
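As a minimal illustration of the “glue” idea above, the sketch below joins hypothetical firewall log events to flow metadata on a shared IP address and time window. All field names, records and values are invented for the example; real tools would draw these from their own schemas.

```python
from datetime import datetime, timedelta

# Hypothetical flow metadata (from packet capture) and firewall log events.
flows = [
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.9", "dst_port": 443,
     "start": datetime(2019, 6, 1, 12, 0, 5), "app": "TLS"},
]
log_events = [
    {"ip": "10.0.0.5", "time": datetime(2019, 6, 1, 12, 0, 7),
     "message": "outbound connection blocked"},
]

def correlate(flows, events, window=timedelta(seconds=30)):
    """Match log events to flows that share an IP address within a time window."""
    matches = []
    for ev in events:
        for fl in flows:
            if ev["ip"] in (fl["src_ip"], fl["dst_ip"]) and \
               abs(ev["time"] - fl["start"]) <= window:
                matches.append((ev, fl))
    return matches

for ev, fl in correlate(flows, log_events):
    print(f'{ev["message"]}: {fl["src_ip"]} -> {fl["dst_ip"]}:{fl["dst_port"]} ({fl["app"]})')
```

The same join keys (IP address, port, time) work regardless of the source format, which is why network data makes a good common denominator across otherwise siloed tools.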

By leveraging a common, authoritative source of packet-level evidence organizations can create a “community of interoperability” across all their security and performance monitoring tools that drives faster response and greater productivity.

By integrating this packet-level network history with their security tools, SecOps teams can pivot quickly from alerts to concrete evidence, reducing investigation times from hours or days to just minutes.

Endace’s EndaceProbe Analytics Platform does this by enabling solutions from leading security and performance analytics vendors – such as BluVector, Cisco, Darktrace, Dynatrace, Micro Focus, IBM, Ixia, Palo Alto Networks, Splunk and others – to be integrated with and/or hosted on the EndaceProbe platform. Hosted solutions can analyze live packet data for real-time detection, or recorded data for back-in-time investigations. 

The EndaceProbe’s powerful API-based integration allows analysts to go from alerts in any of these tools directly to the related packet history for deep, contextual analysis with a single click. 
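The general pattern behind this kind of pivot can be sketched in a few lines: take the 5-tuple and timestamp from an alert and turn them into a time-bounded packet-history query. The host name, URL scheme and parameter names below are illustrative assumptions for the sketch, not the actual EndaceProbe API.

```python
from datetime import datetime, timedelta
from urllib.parse import urlencode

def pivot_url(alert, base="https://probe.example.com/search",
              margin=timedelta(seconds=60)):
    """Build a packet-history search URL from an alert's 5-tuple and timestamp."""
    params = {
        "start": (alert["time"] - margin).isoformat(),
        "end": (alert["time"] + margin).isoformat(),
        "filter": (f'host {alert["src_ip"]} and host {alert["dst_ip"]} '
                   f'and port {alert["dst_port"]}'),
    }
    return f"{base}?{urlencode(params)}"

alert = {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.9", "dst_port": 443,
         "time": datetime(2019, 6, 1, 12, 0, 7)}
print(pivot_url(alert))
```

Because the pivot is just a URL built from fields every alert already carries, it can be embedded in any tool’s UI – which is what makes the “single click” workflow possible.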

The Road to Agile Deployment

The research showed that many organizations report their lack of visibility is due to having “too few tools in too few places in the network.” There are two reasons for this. One is economic – and we’ll look at that in the next post. The other is that the process of selecting and deploying new security and performance monitoring solutions is very slow.

The reason deploying new solutions is so slow is that they are typically deployed as hardware-based appliances. And as we all know, the process of acquiring budget for, evaluating, selecting, purchasing and deploying hardware can take months. Moreover, appliance-based solutions are prone to obsolescence and are difficult or impossible to upgrade without complete replacement. 

All these things make for an environment that is static and slow-moving: precisely the opposite of what organizations need when seeking to be agile and evolve their infrastructure quickly to meet new needs. Teams cannot evolve systems quickly enough to meet changing needs – which is particularly problematic when it comes to security, because the threat landscape changes so rapidly. As a result, many organizations are left with security solutions that are past their use-by date but can’t be replaced until their CAPEX value has been written down.

The crux of the problem is that many analytics solutions rely on collecting and analyzing network data – which means every solution typically includes its own packet capture hardware. 

Unlike the datacenter, where server virtualization has delivered highly efficient resource utilization, agile deployment and significant cost savings, there isn’t – or rather hasn’t been until now – a common hardware platform that enables network security and performance analytics solutions to be virtualized in the same way. A standardized platform for these solutions needs to include the specialized, dedicated hardware necessary for reliable packet capture and recording at high speed.

This is why Endace designed the EndaceProbe™ Analytics Platform. Multiple EndaceProbes can be deployed across the network to provide a common hardware platform for recording full packet data while simultaneously hosting security and performance analytics tools that need to analyze packet data. 

Adopting a common hardware platform removes the hardware dependence that currently forces organizations to deploy multiple hardware appliances from multiple vendors and frees them up to deploy analytics solutions as virtualized software applications. This enables agile deployment and gives organizations the freedom to choose the security, application performance and network performance solutions that best suit their needs, independent of the underlying hardware.

In the next post, we’ll look at how a common platform can help address some of the economic challenges that organizations face in protecting their networks. 


Network Security and Management Challenges – Part 2: Visibility

Original Entry by : Endace

Stop Flying Blind: How to ensure Network Visibility

Network Visibility Essential to Network Security

Key Research Findings

  • 89% of organizations lack sufficient visibility into network activity to be certain about what is happening.
  • 88% of organizations are concerned about their ability to resolve security and performance problems quickly and accurately.

As outlined in the first post in this series, lack of visibility into network activity was one of the key challenges reported by organizations surveyed by VIB for the Challenges of Managing and Securing the Network 2019 research study. This wasn’t a huge surprise: we know all too well that a fundamental prerequisite for successfully protecting networks and applications is sufficient visibility into network activity. 

Sufficient visibility means being able to accurately monitor end-to-end activity across the entire network, and recording reliable evidence of this activity that allows SecOps, NetOps and DevOps teams to react quickly and confidently to any detected threats or performance issues. 

Context is Key

It might be tempting to conclude that lack of network visibility results from not collecting enough data. In reality, the problem is not having the right data: data with enough context to provide a coherent, big-picture view of activity, and enough detail to enable accurate event reconstruction. This leaves organizations questioning their ability to adequately protect their networks.

Without context, data is just noise. Data tends to be siloed by department. What is visible to NetOps may not be visible to SecOps, and vice versa. It is often siloed inside specific tools too, forcing analysts to correlate data from multiple sources to investigate issues because they lack an independent and authoritative source of truth about network activity. 

Typically, organizations rely on data sources such as log files and network metadata, which lack the detail necessary for definitive event reconstruction. For instance, while network metadata might show that a host on the network communicated with a suspect external host, it won’t tell you what was actually transferred. For that, you need full packet data. 

In addition, network metadata and packet data are captured independently off the wire, making them far more resistant to compromise. Log files and other host-based data sources can be tampered with by cyber attackers to hide evidence of their presence and activity, or may simply fail to record the vital clues necessary to investigate a threat or issue.

Combining Network Metadata with Full Packet Data for 100% Visibility

The best possible solution to improving visibility is a combination of full packet data and rich network metadata. Metadata gives the big picture view of network activity and provides an index that allows teams to quickly locate relevant full packet data. Full packet data contains the “payload” that lets teams reconstruct, with certainty, what took place.
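A minimal sketch of how metadata can serve as that index (the record layout and field names are illustrative, not any particular product’s schema): lightweight flow summaries are searched first, and only the matching time window then needs to be pulled as full packets.

```python
# Illustrative flow-summary records: one small entry per flow, covering
# hours of traffic cheaply, while full packets stay in bulk storage.
metadata = [
    {"src": "10.0.0.5", "dst": "203.0.113.9", "port": 443,
     "t0": 1000.0, "t1": 1012.5, "bytes": 48210},
    {"src": "10.0.0.7", "dst": "198.51.100.2", "port": 53,
     "t0": 1001.2, "t1": 1001.3, "bytes": 180},
]

def find_flows(metadata, suspect_ip):
    """Use lightweight metadata to locate flows involving a suspect host."""
    return [m for m in metadata if suspect_ip in (m["src"], m["dst"])]

def packet_query(flow):
    """Return the narrow time window and filter needed to pull full packets."""
    return {"start": flow["t0"], "end": flow["t1"],
            "filter": f'host {flow["src"]} and host {flow["dst"]} '
                      f'and port {flow["port"]}'}

for fl in find_flows(metadata, "203.0.113.9"):
    print(packet_query(fl))
```

The metadata search is fast and broad; the packet retrieval is slow but surgical. Combining the two is what delivers both the big picture and the payload-level certainty.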

Collecting both types of data gives NetOps, DevOps and SecOps teams the information they need to quickly investigate threats or performance problems coupled with the ability to see precisely what happened so they know how to respond with confidence.

This combination provides the context needed to deliver both a holistic picture of network activity and the detailed granular data required to give certainty. It also provides an independent, authoritative source of network truth that makes it easy to correlate data from multiple sources – such as log files – and validate their accuracy.

With the right evidence at hand, teams can respond more quickly and accurately when events occur. 

In the next post in this series, we’ll look at how to make this evidence easily accessible to the teams and tools that need it – and how this can help organizations be more agile in responding to security threats and performance issues.


Introducing the Network Security and Management Challenges Blog Series

Original Entry by : Endace

Recent research provides insight into overcoming the challenges of managing and securing the network

Network Security and Performance Management Research

A Big Thank-You

We’d like to take this opportunity to thank all of the companies and individuals that participated in both studies. Without your participation, it would not have been possible to produce these reports and the valuable insight they contain.

For those who didn’t get a chance to participate, please click here to register your interest in participating in our 2020 research projects.

Last year, Endace participated in two global research studies focusing on the challenges of protecting enterprise networks. The results of both provide powerful insights into the state of network security today, and what organizations can do to improve the security and reliability of their networks. In this series of blog posts, we’re going to take a deep dive into the results and their implications. 

We commissioned an independent, US-based research company, Virtual Intelligence Briefing (VIB), to conduct the research underpinning the Challenges of Managing and Securing the Network 2019 report. VIB surveyed senior executives and technical staff at more than 250 large, global enterprises to understand the challenges they face in protecting against cyber threats and preventing network and application performance issues. 

Organizations from a range of industry verticals including Finance, Healthcare, Insurance and Retail participated. Annual revenues of participating companies were between $250M and $5B+, and respondents included senior executives such as CIOs and CISOs, as well as technical management and technical roles. 

Our second research project was with Enterprise Management Associates (EMA) and focused on what leading organizations are doing to improve their cybersecurity, and which tactical choices are making the biggest difference. This research was based on responses to a detailed survey of more than 250 large enterprises across a wide range of industries.

You can download a summary of EMA’s report here: “Unlocking High Fidelity Security 2019“.

So what did we find out? 

When it comes to securing their networks from cyberattacks, organizations find it hard to ‘see’ all the threats, making detection and resolution of security and performance issues cumbersome and often inconclusive. They lack sufficient visibility into network activity, with too few tools in too few places to be confident they can quickly and effectively respond to cyber threats and performance issues.

The need for greater agility was also a common challenge, with alert fatigue, tool fatigue and lack of integration between tools making the investigation and resolution process slow and resource-intensive. 

Organizations also face significant economic challenges in the way they are currently forced to purchase and deploy solutions. This leaves them unable to evolve quickly enough to meet the demands imposed by today’s fast-moving threat landscape and 24×7 network and application uptime requirements. 

In this series, we’ll explore each of these three challenges – Visibility, Agility and Economics – while also looking at how they are intrinsically inter-related. Understanding and addressing all of these challenges together revolutionizes network security and management, and enables organizations to realize greater efficiency while saving money.

Our next post will look at why organizations lack visibility into network activity and how they can overcome this challenge.