Network Security and Management Challenges Blog Series – Part 4

Original Entry by : Endace

Driving Economic Efficiency in Cyber Defense

Key Research Findings

  • Available budget, freedom to choose the best solutions, and platform fatigue all affect the ability of system architects to design and deploy the solutions that best meet the organization’s needs.
  • 78% of system architects reported that platform fatigue is a significant challenge, with 29% rating the level of challenge as high.
  • More than 90% of respondents reported that the process of acquiring and deploying security, network or application performance platforms is challenging, with almost half reporting that it is either extremely or very challenging.

Most of what’s written about cybersecurity focuses on the mechanics of attacks and defense. But, as recent research shows, the economics of security is just as significant. It’s not just lack of available budget – departments always complain about that – but how they are forced to allocate their budgets.

Currently, security solutions are often hardware-based, which forces organizations into making multiple CAPEX investments – with accompanying complex, slow purchase processes.

More than three-quarters of respondents to the survey reported that “the challenge of constraints caused by CAPEX cycle (e.g. an inability to choose best possible solutions when the need arises) is significant.” Almost half reported being stuck with solutions that have “outlived their usefulness, locked into particular vendors or unable to choose best-of-breed solutions.”

Speed of deployment is also a significant challenge for organizations, with more than 50% of respondents reporting that “deploying a new security, network or application performance platform takes six to twelve months or longer.” 

As outlined in the previous post, existing security solutions are expensive, inflexible, hardware-dependent and take too long to deploy or upgrade. The process of identifying a need, raising budget, testing, selecting and deploying hardware-based security and performance monitoring solutions simply takes too long. And the cost is too high.

Contrast this with cyber attackers, who don’t require costly hardware to launch their attacks. They are not hampered by having to negotiate slow, complex purchase and deployment cycles. And often they leverage their target’s own infrastructure for attacks. The truth is that the economics of cybersecurity is broken, with the balance radically favoring attackers at the expense of their victims.

Reshaping the economics of cyberdefense

Companies have a myriad of choices when it comes to possible security, network performance and application performance monitoring solutions. Typically, they deploy many different tools to meet their specific needs. 

As discussed in the previous post, the lack of a common hardware architecture for analytics tools has prevented organizations from achieving the same cost savings and agility in their network security and monitoring infrastructure that virtualization has enabled in other areas of their IT infrastructure. As a result, budgets are stretched, organizations don’t have the coverage they’d like (leading to blind spots in network visibility) and deploying and managing network security and performance monitoring tools is slow, cumbersome and expensive.

Consolidating tools onto a common hardware platform – such as our EndaceProbe – helps organizations overcome many of the economic challenges they face:

  • It lets them reduce their hardware expenditure, resulting in significant CAPEX and OPEX savings. 
  • Reduced hardware expenditure frees up budget that can be directed towards deploying more tools in more places on the network – to remove visibility blind spots – and deploying tools the company needs but couldn’t previously afford.
  • Teams gain the freedom to choose what tools they adopt without being locked into “single-stack” vendor solutions. 
  • Teams can update or replace security and performance monitoring functions by deploying software applications on the existing hardware platform without a rip-and-replace. This significantly reduces cost and enables much faster, more agile deployment.

The cost of the hardware infrastructure needed to protect and manage the network can also be shared by SecOps, NetOps, DevOps and IT teams, further reducing CAPEX and OPEX and facilitating closer cooperation and collaboration between teams.

For architects, a common hardware platform becomes a network element that can be designed into the standard network blueprint – reducing complexity and ensuring visibility across the entire network. And for IT teams responsible for managing the infrastructure it avoids the platform fatigue that currently results from having to manage multiple different hardware appliances from multiple different vendors.

Because analytics functionality is abstracted from the underlying EndaceProbe hardware, that functionality can be changed or upgraded easily, enabling – as we saw in the last post – far more agile deployment and the freedom to deploy analytics tools that best meet the company’s needs rather than being locked into specific vendors’ offerings.

Equally importantly, it extends the useful life of the EndaceProbe hardware too. No longer does hardware have to be replaced in order to upgrade or change analytics functionality. And as network speeds and loads increase, older EndaceProbes can be redeployed to edge locations and replaced at the network core with newer models offering higher-speeds and greater storage density. This ensures companies get maximum return on their hardware investment.

Lastly, their modular architecture allows multiple, physical EndaceProbes to be stacked or grouped to form centrally-managed logical EndaceProbes capable of scaling to network speeds of hundreds of gigabits-per-second and storing petabytes of network history.

A Final Word

This blog series has looked at the three key challenges – Visibility, Agility and Economic Efficiency (this post) – that enterprises report they face in protecting their networks and applications from cyber threats and costly performance issues. These challenges are interrelated: it is only by addressing all three that organizations can achieve the level of confidence and certainty necessary to effectively protect their critical assets.


Endace Honored with Ten Accolades in
Security Industry Awards Sweep

Original Entry by : Endace

Last month was officially our most successful month for awards ever! Maybe it’s got something to do with it being February in a Leap Year?

Whatever the reason, we’re thrilled to report that Endace received no less than ten industry awards last week, winning three top spots at the Cyber Defense Magazine InfoSec Awards and a further seven awards at the Info Security Product Guide Global Excellence Awards.


Endace’s Cary Wright (left) and Michael Morris (right) accept the award for “Best Product, Packet Capture Platform” from Cyber Defense Magazine

CYBER DEFENSE MAGAZINE (CDM) is the industry’s leading electronic information security magazine. Rolling out the red carpet at RSA in San Francisco this week, CDM’s panel of judges voted the EndaceProbe Analytics Platform Product Suite best in class for the following categories:

  • Most Innovative, Network Security and Management
  • Best Product, Packet Capture Platform
  • Hot Company, Security Investigation Platform

INFO SECURITY PRODUCTS GUIDE is the industry’s leading information security research and advisory guide. The awards panel, which includes 35 judges from around the world, recognized Endace in the following categories:

  • Grand Trophy Winner
  • Best Security Hardware (Gold): EndaceProbe Analytics Platform Product Suite
  • Most Innovative Security Hardware of the Year (Gold): EndaceProbe Analytics Platform Product Suite and Fusion Partner Program
  • Network Security and Management (Gold): EndaceProbe Analytics Platform with EndaceVision
  • Critical Infrastructure Security (Gold): EndaceProbe Analytics Platform Product Suite
  • Best Security Solution (Silver): EndaceProbe Analytics Platform Product Suite and Fusion Partner Program
  • Network Visibility, Security & Testing (Silver): EndaceProbe Analytics Platform with EndaceVision

We’re looking forward to attending the Info Security Product Guide Awards 2020 presentation ceremony and dinner in October to celebrate the Grand Trophy win and the six other awards.


We’d like to extend a big thank you to the judging panels for both award programs. And congratulations to fellow 2020 winners including Endace Fusion Partners Cisco, Ixia (a Keysight company), Darktrace and Gigamon.


Network Security and Management
Challenges – Part 3: Agility

Original Entry by : Endace

The Need for Agile Cyberdefense – and How to Achieve it

Key Research Findings

  • 75% of organizations report significant challenges with alert fatigue and 82% report significant challenges with tool fatigue.
  • 91% of respondents report significant challenges in “integrating solutions to streamline processes, increase productivity and reduce complexity”.
  • Investigations are often slow and resource-intensive, with 15% of issues taking longer than a day to investigate and involving four or more people in the process.

In part two of this series of blog posts, we looked at Visibility as one of the key challenges uncovered in the research study Challenges of Managing and Securing the Network 2019.

In this third post, we’ll be discussing another of the key challenges that organizations reported: Agility

From a cybersecurity and performance management perspective, the term “Agility” can mean two different things. In one sense it can mean the ability to investigate and respond quickly to cyber threats or performance issues. But it can also refer to the ability to rapidly deploy new or upgraded solutions in order to evolve the organization’s ability to defend against, or detect, new security threats or performance issues. 

To keep things clear let’s refer to these two different meanings for agility as “Agile Response” and “Agile Deployment.”

Enabling Agile Response

In the last post, we looked at the data sources organizations can use to improve their visibility into network activity – namely using network metadata, combined with full packet data, to provide the definitive evidence that enables analysts to quickly and conclusively investigate issues. 

In order to leverage this data, the next step is to make it readily available to the tools and teams that need access to it. Tools can access the data to more accurately detect issues, and teams get quick and easy access to the definitive evidence they need to investigate and resolve issues faster and more effectively. 

Organizations report that they are struggling with two significant issues when it comes to investigating and resolving security or performance issues. 

The first is they are drowning in the sheer volume of alerts being reported by their monitoring tools. Investigating each issue is a cumbersome and resource-intensive process, often involving multiple people. As a result there is typically a backlog of issues that never get looked at – representing an unknown level of risk to the organization.

The second issue, which compounds the alert fatigue problem, is that the tools teams use are not well-integrated, making the investigation process slow and inefficient. In fact, 91% of the organizations surveyed reported significant challenges in “integrating solutions to streamline processes, increase productivity and reduce complexity.” The result is that analysts are forced to switch from tool to tool (also known as “swivel chair integration”) to try to piece together a “big-picture” view of what happened.

Integrating network metadata and packet data into security and performance monitoring tools is a way to overcome both these challenges:

  • It gives teams access to a shared, authoritative source of truth about network activity. Analysts can pivot from an alert, or a metadata query, directly to the related packets for conclusive verification of what took place. This simplifies and accelerates investigations, making teams dramatically more productive and eliminating alert fatigue.
  • It enables a standardized investigation process. Regardless of the tool an analyst is using, they can get directly from an alert or query to the forensic detail – the packets – in the same way every time. 
  • It enables data from multiple sources to be correlated more easily. This is typically what teams are looking to achieve through tighter tool integration. Network data provides the “glue” (IP addresses, ports, time, application information, etc.) that ties together data from other diverse sources (log files, SNMP alerts, etc.) – as illustrated in the sketch after this list.
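
To make the “glue” idea concrete, below is a minimal Python sketch of this kind of correlation. The record layouts, field names and values are invented for illustration – real log and metadata schemas vary by product – but the principle is the same: match records on the shared 5-tuple and an overlapping time window.

```python
from datetime import datetime, timedelta

# Hypothetical records: a firewall log event and a list of network-metadata
# flow summaries. Field names are illustrative, not any specific product's schema.
log_event = {
    "time": datetime(2019, 11, 5, 14, 32, 10),
    "src_ip": "10.1.2.3", "dst_ip": "203.0.113.50",
    "src_port": 49152, "dst_port": 443,
    "message": "Outbound connection blocked",
}

flow_records = [
    {"first_seen": datetime(2019, 11, 5, 14, 32, 8),
     "last_seen": datetime(2019, 11, 5, 14, 32, 12),
     "src_ip": "10.1.2.3", "dst_ip": "203.0.113.50",
     "src_port": 49152, "dst_port": 443,
     "app": "TLS", "bytes": 18234},
]

def correlate(event, flows, window=timedelta(seconds=30)):
    """Return flow records sharing the event's 5-tuple and overlapping its time window."""
    matches = []
    for flow in flows:
        same_conversation = (
            flow["src_ip"] == event["src_ip"] and flow["dst_ip"] == event["dst_ip"]
            and flow["src_port"] == event["src_port"] and flow["dst_port"] == event["dst_port"]
        )
        in_window = (flow["first_seen"] - window) <= event["time"] <= (flow["last_seen"] + window)
        if same_conversation and in_window:
            matches.append(flow)
    return matches

print(correlate(log_event, flow_records))
```

In practice this matching happens inside the analytics tools themselves rather than in hand-written scripts, but it is the shared network attributes that make the join possible.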

By leveraging a common, authoritative source of packet-level evidence organizations can create a “community of interoperability” across all their security and performance monitoring tools that drives faster response and greater productivity.

By integrating this packet-level network history with their security tools, SecOps teams can pivot quickly from alerts to concrete evidence, reducing investigation times from hours or days to just minutes.

Endace’s EndaceProbe Analytics Platform does this by enabling solutions from leading security and performance analytics vendors – such as BluVector, Cisco, Darktrace, Dynatrace, Micro Focus, IBM, Ixia, Palo Alto Networks, Splunk and others – to be integrated with and/or hosted on the EndaceProbe platform. Hosted solutions can analyze live packet data for real-time detection, or analyze recorded data for back-in-time investigations.

The EndaceProbe’s powerful API-based integration allows analysts to go from alerts in any of these tools directly to the related packet history for deep, contextual analysis with a single click. 
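
To make the idea of an alert-to-packets pivot more concrete, here is a hedged Python sketch that builds a time-bounded, filtered packet-history query from an alert. The hostname, URL path and parameter names are assumptions invented for this example – they are not the EndaceProbe’s documented API – but they illustrate the general pattern: take the alert’s conversation details and timestamp, add some context either side, and query the recorded packet history for just that conversation.

```python
from datetime import datetime, timedelta
from urllib.parse import urlencode

def pivot_url(alert, host="probe.example.com", context=timedelta(minutes=2)):
    """Build a time-bounded, filtered packet-history query from an alert.

    The hostname, path and parameter names here are placeholders used to
    illustrate the pattern; they are not a documented EndaceProbe API.
    """
    start = alert["time"] - context
    end = alert["time"] + context
    # Narrow the search to the conversation named in the alert.
    bpf = (f"host {alert['src_ip']} and host {alert['dst_ip']} "
           f"and port {alert['dst_port']}")
    query = urlencode({
        "start": start.isoformat(),
        "end": end.isoformat(),
        "filter": bpf,
    })
    return f"https://{host}/packet-search?{query}"

alert = {"time": datetime(2019, 11, 5, 14, 32, 10),
         "src_ip": "10.1.2.3", "dst_ip": "203.0.113.50", "dst_port": 443}
print(pivot_url(alert))
```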

The Road to Agile Deployment

The research showed that many organizations report their lack of visibility is due to having “too few tools in too few places in the network.” There are two reasons for this. One is economic – and we’ll look at that in the next post. The other is that the process of selecting and deploying new security and performance monitoring solutions is very slow.

The reason deploying new solutions is so slow is that they are typically deployed as hardware-based appliances. And as we all know, the process of acquiring budget for, evaluating, selecting, purchasing and deploying hardware can take months. Moreover, appliance-based solutions are prone to obsolescence and are difficult or impossible to upgrade without complete replacement. 

All these things make for an environment that is static and slow-moving: precisely the opposite of what organizations need when seeking to be agile and evolve their infrastructure quickly to meet new needs. Teams cannot evolve systems quickly enough to meet changing needs – which is particularly problematic when it comes to security, because the threat landscape changes so rapidly. As a result, many organizations are left with security solutions that are past their use-by date but can’t be replaced until their CAPEX value has been written down.

The crux of the problem is that many analytics solutions rely on collecting and analyzing network data – which means every solution typically includes its own packet capture hardware. 

Unlike the datacenter, where server virtualization has delivered highly efficient resource utilization, agile deployment and significant cost savings, there isn’t – or rather hasn’t been until now – a common hardware platform that enables network security and performance analytics solutions to be virtualized in the same way. A standardized platform for these solutions needs to include the specialized, dedicated hardware necessary for reliable packet capture and recording at high speed.

This is why Endace designed the EndaceProbe™ Analytics Platform. Multiple EndaceProbes can be deployed across the network to provide a common hardware platform for recording full packet data while simultaneously hosting security and performance analytics tools that need to analyze packet data. 

Adopting a common hardware platform removes the hardware dependence that currently forces organizations to deploy multiple hardware appliances from multiple vendors and frees them up to deploy analytics solutions as virtualized software applications. This enables agile deployment and gives organizations the freedom to choose the security, application performance and network performance solutions that best suit their needs, independent of the underlying hardware.

In the next post, we’ll look at how a common platform can help address some of the economic challenges that organizations face in protecting their networks. 


Network Security and
Management Challenges – Part 2: Visibility

Original Entry by : Endace

Stop Flying Blind: How to ensure Network Visibility

Network Visibility Essential to Network Security

Key Research Findings

  • 89% of organizations lack sufficient visibility into network activity to be certain about what is happening.
  • 88% of organizations are concerned about their ability to resolve security and performance problems quickly and accurately.

As outlined in the first post in this series, lack of visibility into network activity was one of the key challenges reported by organizations surveyed by VIB for the Challenges of Managing and Securing the Network 2019 research study. This wasn’t a huge surprise: we know all too well that a fundamental prerequisite for successfully protecting networks and applications is sufficient visibility into network activity. 

Sufficient visibility means being able to accurately monitor end-to-end activity across the entire network, and to record reliable evidence of this activity so that SecOps, NetOps and DevOps teams can react quickly and confidently to any detected threats or performance issues.

Context is Key

It might be tempting to suggest that lack of network visibility results from not collecting enough data. In reality, the problem is a lack of the right data: data that provides both the context for a coherent, big-picture view of activity and the detail needed for accurate event reconstruction. This leaves organizations questioning their ability to adequately protect their networks.

Without context, data is just noise. Data tends to be siloed by department. What is visible to NetOps may not be visible to SecOps, and vice versa. It is often siloed inside specific tools too, forcing analysts to correlate data from multiple sources to investigate issues because they lack an independent and authoritative source of truth about network activity. 

Typically, organizations rely on data sources such as log files and network metadata, which lack the detail necessary for definitive event reconstruction. For instance, while network metadata might show that a host on the network communicated with a suspect external host, it won’t give you the full details about what was transferred. For that, you need full packet data.

In addition, network metadata and packet data are the only data sources that are immune to potential compromise. Log files and other data sources can be tampered with by cyber attackers to hide evidence of their presence and activity; or may simply not record the vital clues necessary to investigate a threat or issue.

Combining Network Metadata with Full Packet Data for 100% Visibility

The best way to improve visibility is to combine full packet data with rich network metadata. Metadata gives the big-picture view of network activity and provides an index that allows teams to quickly locate relevant full packet data. Full packet data contains the “payload” that lets teams reconstruct, with certainty, what took place.
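
As a simple illustration of how the two data types work together, the Python sketch below uses the open-source scapy library to pull the packets for a single conversation – identified beforehand from metadata – out of a capture file and inspect their payloads. The file name and the addresses are placeholders invented for this example.

```python
from scapy.all import rdpcap, IP, TCP  # pip install scapy

# Suppose metadata has already pointed us at a suspicious conversation.
SUSPECT_CLIENT = "10.1.2.3"
SUSPECT_SERVER = "203.0.113.50"
SUSPECT_PORT = 443

packets = rdpcap("capture.pcap")  # hypothetical capture file

for pkt in packets:
    if IP in pkt and TCP in pkt:
        ips = {pkt[IP].src, pkt[IP].dst}
        ports = {pkt[TCP].sport, pkt[TCP].dport}
        if ips == {SUSPECT_CLIENT, SUSPECT_SERVER} and SUSPECT_PORT in ports:
            payload = bytes(pkt[TCP].payload)
            if payload:
                # The payload is the actual data on the wire -- the detail
                # that flow summaries and log files cannot provide.
                print(pkt[IP].src, "->", pkt[IP].dst, len(payload), "bytes")
```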

Collecting both types of data gives NetOps, DevOps and SecOps teams the information they need to quickly investigate threats or performance problems coupled with the ability to see precisely what happened so they know how to respond with confidence.

This combination provides the context needed to deliver both a holistic picture of network activity and the detailed granular data required to give certainty. It also provides an independent, authoritative source of network truth that makes it easy to correlate data from multiple sources – such as log files – and validate their accuracy.

With the right evidence at hand, teams can respond more quickly and accurately when events occur. 

In the next post in this series, we’ll look at how to make this evidence easily accessible to the teams and tools that need it – and how this can help organizations be more agile in responding to security threats and performance issues.


Introducing the Network Security and
Management Challenges Blog Series

Original Entry by : Endace

Recent research provides insight into overcoming the challenges of managing and securing the network

Network Security and Performance Management Research

A Big Thank-You

We’d like to take this opportunity to thank all of the companies and individuals that participated in both studies. Without your participation, it would not have been possible to produce these reports and the valuable insight they contain.

For those who didn’t get a chance to participate, please click here to register your interest in participating in our 2020 research projects.

Last year, Endace participated in two global research studies focusing on the challenges of protecting enterprise networks. The results of both provide powerful insights into the state of network security today, and what organizations can do to improve the security and reliability of their networks. In this series of blog posts, we’re going to take a deep dive into the results and their implications. 

We commissioned an independent, US-based research company, Virtual Intelligence Briefing (VIB), to conduct the research underpinning the Challenges of Managing and Securing the Network 2019 report. VIB surveyed senior executives and technical staff at more than 250 large, global enterprises to understand the challenges they face in protecting against cyber threats and preventing network and application performance issues.

Organizations from a range of industry verticals including Finance, Healthcare, Insurance and Retail participated. Annual revenues of participating companies were between $250M and $5B+, and respondents included senior executives such as CIOs and CISOs, as well as technical management and technical roles.

Our second research project, with Enterprise Management Associates (EMA), focused on what leading organizations are doing to improve their cybersecurity and which tactical choices are making the biggest difference. This research was based on responses to a detailed survey of more than 250 large enterprises across a wide range of industries.

You can download a summary of EMA’s report here: “Unlocking High Fidelity Security 2019“.

So what did we find out? 

When it comes to securing their networks from cyberattacks, organizations find it hard to ‘see’ all the threats, making detection and resolution of security and performance issues cumbersome and often inconclusive. They lack sufficient visibility into network activity, with too few tools in too few places to be confident they can quickly and effectively respond to cyber threats and performance issues.

The need for greater agility was also a common challenge, with alert fatigue, tool fatigue and lack of integration between tools making the investigation and resolution process slow and resource-intensive. 

Organizations also face significant economic challenges in the way they are currently forced to purchase and deploy solutions. This leaves them unable to evolve quickly enough to meet the demands imposed by today’s fast-moving threat landscape and 24×7 network and application uptime requirements. 

In this series, we’ll explore each of these three challenges – Visibility, Agility and Economics – while also looking at how they are intrinsically interrelated. Understanding and addressing all of these challenges together transforms network security and management, and enables organizations to realize greater efficiency while saving money.

Our next post will look at why organizations lack visibility into network activity and how they can overcome this challenge.


Packet Detectives Episode 1: The Case of the Retransmissions

Original Entry by : Endace

Demystifying Network Investigations with Packet Data

By Michael Morris, Director of Global Business Development, Endace


Michael Morris, Director of Global Business Development, Endace

As I talk to security analysts, network operations engineers and applications teams around the world a common theme regularly emerges: that troubleshooting security or performance issues with log or flow data alone just doesn’t cut it.

Most folks report spending way too many hours troubleshooting problems only to realize they just don’t have enough detail to know exactly what happened. Often this results in more finger pointing and unresolved issues. Too much time spent investigating issues also causes other alerts to start piling up, resulting in stress and undue risk to the organization from a backlog of alerts that never get looked at.

On the other hand, those that use full packet capture data to troubleshoot problems report significantly faster resolution times and greater confidence because they can see exactly what happened on the wire.

Many folks I talk to also say they don’t have the expertise necessary to troubleshoot issues using packet data. But it’s actually much easier than you might expect. Packet decode tools – like Wireshark – are powerful and quite self-explanatory. And there are tons of resources available on the web to help you out. You don’t need to be a mystical networking guru to gain valuable insights from packet data!
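
As a small, concrete example: Wireshark’s tcp.analysis.retransmission display filter will surface retransmitted segments directly, and the Python sketch below (using the open-source scapy library) applies a much cruder version of the same idea to a capture file by flagging repeated TCP sequence numbers within a connection. The capture file name is a placeholder, and a real analysis would also need to account for keep-alives and out-of-order delivery.

```python
from scapy.all import rdpcap, IP, TCP  # pip install scapy

packets = rdpcap("slow_app.pcap")  # hypothetical capture of the slow application

seen = set()
retransmissions = 0

for pkt in packets:
    if IP in pkt and TCP in pkt and bytes(pkt[TCP].payload):
        # A repeated (connection, sequence number) pair carrying payload is a
        # simple indicator that a segment was sent more than once.
        key = (pkt[IP].src, pkt[IP].dst, pkt[TCP].sport, pkt[TCP].dport, pkt[TCP].seq)
        if key in seen:
            retransmissions += 1
            print(f"Possible retransmission: {pkt[IP].src}:{pkt[TCP].sport} -> "
                  f"{pkt[IP].dst}:{pkt[TCP].dport} seq={pkt[TCP].seq}")
        seen.add(key)

print(f"{retransmissions} suspected retransmissions out of {len(packets)} packets")
```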

Getting to the relevant packets is quick and easy too thanks to the EndaceProbe platform’s integration with solutions from our Fusion Partners like Cisco, IBM, Palo Alto Networks, Splunk and many others. Analysts can quickly pivot from alerts in any of those tools directly to related packet data with a single click, gaining valuable insights into their problems quickly and confidently.

To help further, we thought it would be useful to kick-off a video series of “real-world” investigation scenarios to show just how easily packet data can be used to investigate and resolve difficult issues (security or performance-related) in your network.

So here’s the first video in what we hope to make a regular series. Watch as industry-renowned SharkFest presenter and all-round Wireshark guru, Betty Dubois, walks us through investigating an application slow-down that is causing problems for users. The truth is in the packets …

We hope you find this video useful. Please let us know if you have ideas for other examples you’d like to see.


The Importance of Network Data to Threat Hunting (Part 3)

Original Entry by : Robert Salier

Frameworks and Regulations

By Robert Salier, Product Manager, Endace


Robert Salier, Product Manager, Endace

In this, the third article in our series on threat hunting (see here for Part 1 and Part 2), we explore the frameworks and regulations most relevant to threat hunting.

These tend to fall into two categories: those that address cybersecurity at a governance level, and those that facilitate insight into individual attacks and help formulate appropriate defense actions.

Governance Level Frameworks and Regulations

The regulatory environment influences threat hunting, and cyber defense in general. In many countries, regulations impose obligations on disclosure of breaches, including what information must be provided, when, and to which stakeholders. This influences the information that an organization needs to know about a breach, and hence its choice of strategies, policies, processes and tools. These regulations generally require companies to disclose a breach to all customers that have been affected. However, if an organization cannot ascertain which customers were affected, or even whether any customers were affected, then it may need to contact every customer. The only thing worse than having to disclose a breach is having to disclose a breach without being able to provide the details your customers expect you to know.

There are also a number of frameworks that address cybersecurity at the governance level, which in some cases overlap with regulations and deal with many of the same issues and considerations. Collectively, these frameworks and regulations help ensure organizations implement good strategies, policies, processes and tools, for example:

  • Which systems and data are most important to the organization
  • What information security policies should be in place
  • How cybersecurity should be operationalized (e.g. what organizational structure, security architecture and systems are most appropriate for the organization)
  • Incident management processes
  • Best practice guidelines

Prevalent frameworks and regulations include…

  • ISO 27000 Series of Information Security Standards
    A comprehensive family of standards providing a set of best practices for information security management. Maintained by the International Organization for Standardization (ISO), it has been broadly adopted around the globe.
  • NIST Special Publication 800-53
    A catalogue of security and privacy controls for all U.S. federal organizations except those related to national security.
  • NIST Cybersecurity Framework
    A policy framework for private sector organizations to assess and improve their ability to prevent, detect, and respond to cyber attacks. It was developed for the USA, but has been adopted in a number of countries.

Frameworks to Characterize Attacks and Facilitate Responses

A number of frameworks have been developed to help describe and characterize attacker activity, and ultimately facilitate defense strategies and tactics.

Prevalent frameworks include…

  • Cyber Kill Chain
    Developed by Lockheed Martin, this framework adapts the military concept of a “kill chain” to cyber attack and defense. It decomposes a cyber attack into seven generic stages, providing a framework for characterizing and responding to attacks (see the sketch after this list). Refer to this Dark Reading article for some discussion of the benefits and limitations of this framework.
  • Diamond Model
    This model describes attacks by decomposing each attack into four key aspects: the adversary, their capabilities, the infrastructure they used, and the victim(s). Multiple attack diamonds can be plotted graphically in various ways, including timelines and groupings, facilitating deeper insight.
  • Mitre Att&ck
    Developed by Mitre, Att&ck stands for “Adversarial Tactics, Techniques, and Common Knowledge”. It is essentially a living, growing knowledge base capturing intelligence gained from millions of attacks on enterprise networks. It consists of a framework that decomposes a cyber attack into eleven different phases, a list of techniques used in each phase by adversaries, documented real-world use of each technique, and a list of known threat actor groups. Att&ck is becoming increasingly popular, used by and contributed to by many security vendors and consultants.
  • OODA Loop
    Describes a process cycle of “Observe – Orient – Decide – Act”. Originally developed for military combat operations, it is now being applied to commercial operations.
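
As a simple illustration of how a framework like the Cyber Kill Chain is applied, the Python sketch below tags hypothetical findings from an investigation with the stage of the kill chain they correspond to. The observations are invented for this example; only the seven stage names come from the framework itself.

```python
from enum import Enum

class KillChainStage(Enum):
    """The seven stages of Lockheed Martin's Cyber Kill Chain."""
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVES = 7

# Hypothetical observations from an investigation, each tagged with the stage
# it corresponds to. Viewing observations by stage shows how far an attack
# progressed and where defenses could have intervened earlier.
observations = [
    ("Phishing email with malicious attachment delivered", KillChainStage.DELIVERY),
    ("Macro executed, dropper installed", KillChainStage.INSTALLATION),
    ("Beaconing to external host over HTTPS", KillChainStage.COMMAND_AND_CONTROL),
]

for description, stage in sorted(observations, key=lambda o: o[1].value):
    print(f"[stage {stage.value}: {stage.name}] {description}")
```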

The Importance of Network Data to Threat Hunting (Part 2)

Original Entry by : Robert Salier

Threat Hunting in Practice

By Robert Salier, Product Manager, Endace


Robert Salier, Product Manager, Endace

Hunting for security threats involves looking for traces of attackers in an organization’s IT environment, both past and present. It involves creativity combined with (relatively loose) methodologies and frameworks, focused on outsmarting an attacker.

Threat Hunting relies on a deep knowledge of the Tactics, Techniques and Procedures (TTP’s) that adversaries use, and a thorough knowledge of the organization’s IT environment. Well executed threat hunts provide organizations with deeper insight into their IT environment and into where attackers might hide.

This, the second article in our series of blog posts on threat hunting (read Part 1 here), looks at how leading organizations approach threat hunting, and the various data, resources, systems, and processes required to threat hunt effectively and efficiently.

Larger organizations tend to have higher public profiles, more valuable information assets, and complex and distributed environments that present a greater number of opportunities for criminals to infiltrate, hide, and perform reconnaissance without detection. When it comes to seeking out best practice, it’s not surprising that large organizations are the place to look.

Large organizations recognize that criminals are constantly looking for ways to break in undetected and that it is only a matter of time before they succeed – if they haven’t already. While organizations of all sizes are being attacked, larger organizations are the leaders in this proactive approach to hunting down intruders, i.e. “threat hunting”. They have recognized that active threat hunting increases detection rates compared with relying on incident detection alone – i.e. waiting for alerts from automated intrusion detection systems that may never come.

Best practice involves formulating a hypothesis about what may be occurring, then seeking to confirm it. There are three general categories of hypothesis:

  • Driven by threat intelligence from industry news, reports, and feeds.
    e.g. newsfeeds report a dramatic increase in occurrences of a specific ransomware variant targeting your industry, so a threat hunt is initiated with the hypothesis that your organization is being targeted with this ransomware
  • Driven by situational awareness, i.e. focus on infrastructure, assets and data most important to the organization.
    e.g. a hypothesis that your customers’ records are the “crown jewels”, so hackers will be trying to gain access to exfiltrate this data

Having developed a hypothesis as a starting point, leading organizations rely on a range of tools and resources to threat hunt efficiently and effectively:

Historic Data from Hardware, Software and the Network
  • Infrastructure Logs from the individual components of hardware and software that form your IT environment, e.g. firewalls, IDS, switches, routers, databases, and endpoints. These logs capture notable events, alarms and other useful information, which when pieced together can provide valuable insight into historic activity in your environment. They’re like study notes that you take from a textbook, i.e. highly useful, but not a full record – just a summary of what is considered notable. Also, be wary that hackers often delete or modify logs to remove evidence of their malicious activity.
  • Summarized network data (a.k.a. “packet metadata” or “network telemetry”). Traffic on network links can be captured and analyzed in real time to generate a feed of summary information characterizing the network activity. The information that can be obtained goes well beyond the flow summaries that NetFlow provides, e.g. by identifying and summarizing activity and anomalies up to and including layer 7, such as email header information and expired certificates. This metadata can be very useful in hunts and investigations, particularly for correlating network traffic with events and activity from infrastructure logs and users. Also, unlike logs, packet metadata cannot be easily deleted or modified. (A minimal flow-summary sketch follows this list.)
  • Packet level network history. By capturing and storing packets from a network link, you have a verbatim copy of the communication over that link, allowing you to see precisely what was sent and received, with zero loss of fidelity. Some equipment such as firewalls and IDS’s capture small samples of packets, but these capture just a fraction of a second of communications, and therefore must be automatically triggered by a specific alarm or event. Capturing and storing all packets (“full packet capture”, “100% packet capture”) is the only way to obtain a complete history of all communications. Historically, the barriers to full packet capture have been the cost of the required storage and the challenge of locating the packets of interest, given the sheer volume of data. However, recent advances in technology are now breaking down those barriers.
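
As a minimal illustration of the flow-summary idea, the Python sketch below (using the open-source scapy library) rolls the packets in a capture file up into per-conversation summaries – packet and byte counts plus first- and last-seen timestamps. The file name is a placeholder, and real metadata generators go much further, extracting the kind of layer-7 attributes (email headers, certificate details) described above.

```python
from collections import defaultdict
from scapy.all import rdpcap, IP, TCP, UDP  # pip install scapy

packets = rdpcap("capture.pcap")  # hypothetical capture file

flows = defaultdict(lambda: {"packets": 0, "bytes": 0, "first": None, "last": None})

for pkt in packets:
    if IP not in pkt:
        continue
    l4 = TCP if TCP in pkt else UDP if UDP in pkt else None
    sport, dport = (pkt[l4].sport, pkt[l4].dport) if l4 else (0, 0)
    key = (pkt[IP].src, pkt[IP].dst, sport, dport, pkt[IP].proto)
    flow = flows[key]
    flow["packets"] += 1
    flow["bytes"] += len(pkt)
    ts = float(pkt.time)
    flow["first"] = ts if flow["first"] is None else min(flow["first"], ts)
    flow["last"] = ts if flow["last"] is None else max(flow["last"], ts)

# Each entry is a compact summary of one conversation -- far smaller than the
# packets themselves, but enough to spot and prioritize suspicious activity.
for key, summary in flows.items():
    print(key, summary)
```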

Baselines

Baselines are an understanding of what is normal and what is anomalous. Threat hunting involves examining user, endpoint, and network activity, searching for Indicators of Attack (IoAs) and Indicators of Compromise (IoCs) – i.e. “clues” pointing to possible intrusions and malicious activity. The challenge is knowing which activity is normal, and which is anomalous. Without knowing that, in many cases you will not know whether certain activity is to be expected in your environment, or whether it should be investigated.
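
As a minimal illustration of baselining, the Python sketch below builds a simple statistical baseline from invented historical data (hourly outbound connection counts for one host) and flags observations that fall well outside it. Real baselining is far more sophisticated, but the principle – learn what is normal, then flag what is not – is the same.

```python
from statistics import mean, stdev

# Hypothetical history: outbound connections per hour from one host over the
# previous week, used to establish a baseline of "normal" for that host.
hourly_connections = [42, 38, 51, 47, 40, 45, 39, 44, 50, 41, 43, 46]

baseline_mean = mean(hourly_connections)
baseline_std = stdev(hourly_connections)

def is_anomalous(observed, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations above baseline."""
    return observed > baseline_mean + threshold * baseline_std

# A sudden burst of connections is worth a closer look during a hunt.
print(is_anomalous(44))   # False: within normal variation
print(is_anomalous(180))  # True: well outside the baseline
```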

A Centralized Location for Logs and Metadata

Because there are so many disparate sources of logs, centralized collection and storage is a practical necessity for organizations with substantial IT infrastructure. Most organizations use a SIEM (Security Information and Event Management) platform, which may have a dedicated database for storage of logs and metadata, or may use an enterprise data lake. SIEMs can correlate data from multiple sources, support rule-based triggers, and can feature machine learning algorithms able to learn what activity is normal (i.e. “baselining”). Having learned what is normal, they can then identify and flag anomalous activity.

Threat Intelligence

Threat intelligence is knowledge that helps organizations protect themselves against cyber attacks. It encompasses both business level and technical level detail. At a business level this includes general trends in malicious activity, individual breaches that have occurred, and how organizations are succeeding and failing to protect themselves. At a technical level, threat intelligence provides very detailed information on how individual threats work, informing organizations how to detect, block, and remove these threats. Generally this comes in the form of articles intended for consumption by humans, but also encompasses machine-readable intelligence that can be directly ingested by automated systems, e.g. updates to threat detection rules.

Frameworks and Regulations

The regulatory environment influences threat hunting, and cyber defense in general. In many countries, regulations impose obligations on disclosure of breaches, including what information must be provided, when, and to which stakeholders. There are a also a number of frameworks addressing cyber security at the governance level, which in some cases overlap with regulations, dealing with many of the same issues and considerations. Collectively, these frameworks and regulations help to ensure organizations implement good strategies, policies, processes and tools.

In the next article in this series, we explore the frameworks and regulations that apply to threat hunting, and which ensure organizations implement appropriate strategies, policies, processes and tools.


The Importance of Network Data to Threat Hunting (Part 1)

Original Entry by : Robert Salier

Introduction to Threat Hunting

By Robert Salier, Product Manager, Endace


Robert Salier, Product Manager, Endace

Criminal hackers are stealthy. They put huge efforts into infiltrating without triggering intrusion detection systems or leaving traces in logs and metadata … and often succeed. So you need to actively go searching for them. That’s why SecOps teams are increasingly embracing threat hunting.

This is the first in a series of blog articles where we discuss various aspects of threat hunting, and how visibility into network traffic can increase the efficiency and effectiveness of threat hunting. This visibility is often the difference between detecting an intruder, or not, and collecting the conclusive evidence you need to respond to an attack, or not.

In December 2015, Ukraine suffered a power grid cyber attack that disrupted power distribution to the nation’s citizens. Thirty substations were switched off and damaged, leaving 230,000 people without power.

This attack was meticulously planned and executed, with the attackers having first gained access over six months before they finally triggered the outage. There were many stages of intrusion and attack, leaving traces that were only identified in subsequent investigations. Well planned and executed threat hunting would probably have uncovered this intruder activity, and averted the serious outages that took place.

This is a good example of why, in the last few years, threat hunting has been gaining substantial momentum and focus amongst SecOps teams, with increasing efforts to better define and formalize it as a discipline. You’ll see a range of definitions with slightly different perspectives, but the following captures the essence of Threat Hunting:

The process of proactively and iteratively searching through IT infrastructure to detect and isolate advanced threats that evade existing security solutions.

There’s also some divergence in approaches to threat hunting, and in the aspects that individual organizations consider most important, but key themes are:

  • To augment automated detection, increasing the likelihood that threats will be detected.
  • To provide insight into attackers’ Tactics, Techniques and Procedures (TTP’s) and hence inform an organization where they should focus their resources and attention.
  • To identify if, and where, automated systems need updating – e.g. with new triggers.

So, threat hunting involves proactively seeking out attacks on your IT infrastructure that are not detected by automated systems such as IDS’s, firewalls, DLP and EDR solutions. It’s distinct from incident response, which is reactive. It may, however, result in an incident response being triggered.

Although threat hunting can be assisted by machine-based tools, it is fundamentally an activity performed by people, not machines, heavily leveraging human intelligence, wisdom and experience.

In the next article, we explore how leading organizations approach threat hunting, and the various data, resources, systems, and processes required to threat hunt effectively and efficiently.

In the meantime, feel free to browse the Useful References page in our Threat Hunting section on endace.com, which contains both a glossary and useful links to various pages related to threat hunting. Below are some additional useful references.

References

(1) Threat Hunting Report (Cyber Security Insiders), p22

(2) 2018 Threat Hunting Survey Results (SANS), p13

(3) 2018 Threat Hunting Survey Results (SANS), p5

(4) Improving the Effectiveness of the Security Operations Center (Ponemon Institute), p10

(5) The Ultimate Guide To Threat Hunting, InfoSec Institute



Watch Endace on Cisco ThreatWise TV from RSA 2019

Original Entry by : Endace

It was a privilege to attend this year’s RSA cybersecurity event in San Francisco, and one of our top highlights was certainly the opportunity to speak to Cisco’s ThreatWise TV host Jason Wright. Watch the video on Cisco’s ThreatWise TV (or below) as Jason interviews our very own Michael Morris to learn more about how Cisco and Endace integrate to accelerate and improve cyber incident investigations.

In this short 4-minute video, Michael demonstrates how Cisco Firepower and Stealthwatch can be used together to investigate intrusion events, using Cisco dashboards and EndaceVision to drill down into events by priority and classification to show where threats come from, who has been affected and whether any lateral movement occurred, as well as viewing conversation history and traffic profiles. Michael also explains how Cisco and Endace work together to ‘find a needle in a haystack’ across petabytes of network traffic.

A big thanks to Cisco and to Jason for giving us this spotlight opportunity. If you have any questions about how Cisco and Endace integrations can accelerate and improve cyber incident investigation, visit our Cisco partner page.