Network Security and Management Challenges – Part 2: Visibility

Original Entry by: Endace

Stop Flying Blind: How to ensure Network Visibility

Network Visibility Essential to Network Security

Key Research Findings

  • 89% of organizations lack sufficient visibility into network activity to be certain about what is happening.
  • 88% of organizations are concerned about their ability to resolve security and performance problems quickly and accurately.

As outlined in the first post in this series, lack of visibility into network activity was one of the key challenges reported by organizations surveyed by VIB for the Challenges of Managing and Securing the Network 2019 research study. This wasn’t a huge surprise: we know all too well that a fundamental prerequisite for successfully protecting networks and applications is sufficient visibility into network activity. 

Sufficient visibility means being able to accurately monitor end-to-end activity across the entire network, and recording reliable evidence of this activity that allows SecOps, NetOps and DevOps teams to react quickly and confidently to any detected threats or performance issues. 

Context is Key

It might be tempting to suggest that lack of network visibility results from not collecting enough data. In reality, the problem is not having enough of the right data: too little context to enable a coherent, big-picture view of activity, and too little detail to enable accurate event reconstruction. This leaves organizations questioning their ability to adequately protect their networks.

Without context, data is just noise. Data tends to be siloed by department. What is visible to NetOps may not be visible to SecOps, and vice versa. It is often siloed inside specific tools too, forcing analysts to correlate data from multiple sources to investigate issues because they lack an independent and authoritative source of truth about network activity. 

Typically, organizations rely on data sources such as log files and network metadata, which lack the detailed data necessary for definitive event reconstruction. For instance, while network metadata might show that a host on the network communicated with a suspect external host, it won’t give you the full details of what was transferred. For that, you need full packet data.

In addition, network metadata and packet data are the only data sources that are immune to potential compromise. Log files and other data sources can be tampered with by cyber attackers to hide evidence of their presence and activity, or may simply not record the vital clues necessary to investigate a threat or issue.

Combining Network Metadata with Full Packet Data for 100% Visibility

The best possible solution to improving visibility is a combination of full packet data and rich network metadata. Metadata gives the big picture view of network activity and provides an index that allows teams to quickly locate relevant full packet data. Full packet data contains the “payload” that lets teams reconstruct, with certainty, what took place.
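
The combination is easy to picture in practice. Below is a minimal sketch – in Python with Scapy, using hypothetical file names and flow values – of how a metadata record (say, a flagged flow from an IDS alert or flow log) acts as an index into recorded full packet data:

```python
from scapy.all import rdpcap, wrpcap, IP, TCP

# Flow record surfaced by metadata, e.g. from an IDS alert (hypothetical values)
flow = {"src": "10.0.0.5", "dst": "203.0.113.7", "sport": 49152, "dport": 443}

packets = rdpcap("capture.pcap")  # the recorded full packet history

def in_flow(pkt):
    """True if the packet belongs to the flagged conversation (either direction)."""
    if IP not in pkt or TCP not in pkt:
        return False
    endpoints = {(pkt[IP].src, pkt[TCP].sport), (pkt[IP].dst, pkt[TCP].dport)}
    return endpoints == {(flow["src"], flow["sport"]), (flow["dst"], flow["dport"])}

conversation = [p for p in packets if in_flow(p)]
wrpcap("conversation.pcap", conversation)  # hand off to Wireshark for full decode
print(f"Extracted {len(conversation)} packets for the flagged flow")
```

Commercial platforms do this at far greater scale and speed, but the principle is the same: metadata narrows the search; packets provide the proof.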

Collecting both types of data gives NetOps, DevOps and SecOps teams the information they need to quickly investigate threats or performance problems, coupled with the ability to see precisely what happened, so they can respond with confidence.

This combination provides the context needed to deliver both a holistic picture of network activity and the detailed granular data required to give certainty. It also provides an independent, authoritative source of network truth that makes it easy to correlate data from multiple sources – such as log files – and validate their accuracy.

With the right evidence at hand, teams can respond more quickly and accurately when events occur. 

In the next post in this series, we’ll look at how to make this evidence easily accessible to the teams and tools that need it – and how this can help organizations be more agile in responding to security threats and performance issues.


Introducing the Network Security and Management Challenges Blog Series

Original Entry by: Endace

Recent research provides insight into overcoming the challenges of managing and securing the network

Network Security and Performance Management Research

A Big Thank-You

We’d like to take this opportunity to thank all of the companies and individuals that participated in both studies. Without your participation, it would not have been possible to produce these reports and the valuable insight they contain.

For those who didn’t get a chance to participate, please click here to register your interest in participating in our 2020 research projects.

Last year, Endace participated in two global research studies focusing on the challenges of protecting enterprise networks. The results of both provide powerful insights into the state of network security today, and what organizations can do to improve the security and reliability of their networks. In this series of blog posts, we’re going to take a deep dive into the results and their implications. 

We commissioned an independent, US-based research company, Virtual Intelligence Briefing (VIB), to conduct the research underpinning the Challenges of Managing and Securing the Network 2019 report. VIB surveyed senior executives and technical staff at more than 250 large, global enterprises to understand the challenges they face in protecting against cyber threats and preventing network and application performance issues.

Organizations from a range of industry verticals including Finance, Healthcare, Insurance and Retail participated. Annual revenues of participating companies were between $250M and $5B+, and respondents included senior executives such as CIOs and CISOs, as well as technical management and technical roles.

Our second research project was with Enterprise Management Associates (EMA) and focused on what leading organizations are doing to improve their cybersecurity and which tactical choices are making the biggest difference. This research was based on responses to a detailed survey of more than 250 large enterprises across a wide range of industries.

You can download a summary of EMA’s report here: “Unlocking High Fidelity Security 2019”.

So what did we find out? 

When it comes to securing their networks from cyberattacks, organizations find it hard to ‘see’ all the threats, making detection and resolution of security and performance issues cumbersome and often inconclusive. They lack sufficient visibility into network activity, with too few tools in too few places to be confident they can quickly and effectively respond to cyber threats and performance issues.

The need for greater agility was also a common challenge, with alert fatigue, tool fatigue and lack of integration between tools making the investigation and resolution process slow and resource-intensive. 

Organizations also face significant economic challenges in the way they are currently forced to purchase and deploy solutions. This leaves them unable to evolve quickly enough to meet the demands imposed by today’s fast-moving threat landscape and 24×7 network and application uptime requirements. 

In this series, we’ll explore each of these three challenges – Visibility, Agility and Economics – while also looking at how they are intrinsically inter-related. Understanding and addressing all of these challenges together revolutionizes network security and management, and enables organizations to realize greater efficiency while saving money.

Our next post will look at why organizations lack visibility into network activity and how they can overcome this challenge.


Packet Detectives Episode 1: The Case of the Retransmissions

Original Entry by: Michael Morris

Demystifying Network Investigations with Packet Data

By Michael Morris, Director of Global Business Development, Endace



As I talk to security analysts, network operations engineers and application teams around the world, a common theme regularly emerges: troubleshooting security or performance issues with log or flow data alone just doesn’t cut it.

Most folks report spending way too many hours troubleshooting problems, only to realize they just don’t have enough detail to know exactly what happened. Often this results in more finger-pointing and unresolved issues. Too much time spent investigating issues also causes other alerts to start piling up, resulting in stress and undue risk to the organization from a backlog of alerts that never get looked at.

On the other hand, those that use full packet capture data to troubleshoot problems report significantly faster resolution times and greater confidence because they can see exactly what happened on the wire.

Many folks I talk to also say they don’t have the expertise necessary to troubleshoot issues using packet data. But it’s actually much easier than you might expect. Packet decode tools – like Wireshark – are powerful and quite self-explanatory. And there are tons of resources available on the web to help you out. You don’t need to be a mystical networking guru to gain valuable insights from packet data!
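
As a small taste – and this is a hedged sketch, assuming tshark and the pyshark Python wrapper are installed, with a hypothetical capture file named slow_app.pcap – here is how little code it takes to use Wireshark’s expert analysis to list TCP retransmissions, the very symptom investigated in the video below:

```python
import pyshark

# Wireshark's expert analysis flags retransmissions; we just ask for them.
cap = pyshark.FileCapture("slow_app.pcap",
                          display_filter="tcp.analysis.retransmission")
retransmissions = list(cap)
print(f"{len(retransmissions)} retransmitted segments found")
for pkt in retransmissions[:10]:  # show the first few
    print(pkt.sniff_time, pkt.ip.src, "->", pkt.ip.dst, "seq", pkt.tcp.seq)
cap.close()
```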

Getting to the relevant packets is quick and easy too thanks to the EndaceProbe platform’s integration with solutions from our Fusion Partners like Cisco, IBM, Palo Alto Networks, Splunk and many others. Analysts can quickly pivot from alerts in any of those tools directly to related packet data with a single click, gaining valuable insights into their problems quickly and confidently.

To help further, we thought it would be useful to kick-off a video series of “real-world” investigation scenarios to show just how easily packet data can be used to investigate and resolve difficult issues (security or performance-related) in your network.

So here’s the first video in what we hope to make a regular series. Watch as industry-renowned SharkFest presenter and all-round Wireshark guru, Betty Dubois, walks us through investigating an application slow-down that is causing problems for users. The truth is in the packets …

We hope you find this video useful. Please let us know if you have ideas for other examples you’d like to see.


The Importance of Network Data to Threat Hunting (Part 3)

Original Entry by: Robert Salier

Frameworks and Regulations

By Robert Salier, Product Manager, Endace


In this, the third article in our series on threat hunting (see here for Part 1 and Part 2), we explore the frameworks and regulations most relevant to threat hunting.

These tend to fall into two categories: those that address cybersecurity at a governance level, and those that facilitate insight into individual attacks and help formulate appropriate defense actions.

Governance Level Frameworks and Regulations

The regulatory environment influences threat hunting, and cyber defense in general. In many countries, regulations impose obligations on disclosure of breaches, including what information must be provided, when, and to which stakeholders. This influences the information that an organization needs to know about a breach, and hence its choice of strategies, policies, processes and tools. These regulations generally require companies to disclose a breach to all customers that have been affected. However, if an organization cannot ascertain which customers were affected, or even whether any customers were affected, then it may need to contact every customer. The only thing worse than having to disclose a breach is having to disclose a breach without being able to provide the details your customers expect you to know.

There are also a number of frameworks addressing cybersecurity at the governance level, which in some cases overlap with regulations, dealing with many of the same issues and considerations. Collectively, these frameworks and regulations help to ensure organizations implement good strategies, policies, processes and tools, e.g. …

  • Which systems and data are most important to the organization
  • What information security policies should be in place
  • How cybersecurity should be operationalized (e.g. what organizational structure, security architecture and systems are most appropriate for the organization)
  • Incident management processes
  • Best practice guidelines

Prevalent frameworks and regulations include…

  • ISO 27000 Series of Information Security Standards
    A comprehensive family of standards providing a set of best practices for information security management. Maintained by the International Organization for Standardization (ISO), it has been broadly adopted around the globe.
  • NIST Special Publication 800-53
    A catalogue of security and privacy controls for all U.S. federal organizations except those related to national security.
  • NIST Cybersecurity Framework
    A policy framework for private sector organizations to assess and improve their ability to prevent, detect, and respond to cyber attacks. It was developed for the USA, but has been adopted in a number of countries.

Frameworks to Characterize Attacks and Facilitate Responses

A number of frameworks have been developed to help describe and characterize attacker activity, and ultimately facilitate defense strategies and tactics.

Prevalent frameworks include…

  • Cyber Kill Chain
    Developed by Lockheed Martin and adapted from military “kill chain” doctrine, this framework decomposes a cyber attack into seven generic stages (reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on objectives), providing a structure for characterizing and responding to attacks. Refer to this Dark Reading article for some discussion of the benefits and limitations of this framework.
  • Diamond Model
    This model describes attacks by decomposing them into four key aspects: details of the adversary, their capabilities, the infrastructure they used, and the victim(s). Multiple attack diamonds can be plotted graphically in various ways, including timelines and groupings, facilitating deeper insight.
  • MITRE ATT&CK
    Developed by MITRE, ATT&CK stands for “Adversarial Tactics, Techniques, and Common Knowledge”. It is essentially a living, growing knowledge base capturing intelligence gained from millions of attacks on enterprise networks. It consists of a framework that decomposes a cyber attack into eleven different phases, a list of techniques adversaries use in each phase, documented real-world use of each technique, and a list of known threat actor groups. ATT&CK is becoming increasingly popular, used by – and contributed to by – many security vendors and consultants.
  • OODA Loop
    Describes a process cycle of “Observe – Orient – Decide – Act”. Originally developed for military combat operations, it is now also applied to commercial operations.

The Importance of Network Data to Threat Hunting (Part 2)

Original Entry by: Robert Salier

Threat Hunting in Practice

By Robert Salier, Product Manager, Endace


Hunting for security threats involves looking for traces of attackers in an organization’s IT environment, both past and present. It involves creativity combined with (relatively loose) methodologies and frameworks, focused on outsmarting an attacker.

Threat hunting relies on a deep knowledge of the Tactics, Techniques and Procedures (TTPs) that adversaries use, and a thorough knowledge of the organization’s IT environment. Well-executed threat hunts provide organizations with deeper insight into their IT environment and into where attackers might hide.

This, the second article in our series of blog posts on threat hunting (read Part 1 here), looks at how leading organizations approach threat hunting, and the various data, resources, systems, and processes required to threat hunt effectively and efficiently.

Larger organizations tend to have higher public profiles, more valuable information assets, and complex and distributed environments that present a greater number of opportunities for criminals to infiltrate, hide, and perform reconnaissance without detection. When it comes to seeking out best practice, it’s not surprising that large organizations are the place to look.

Large organizations recognize that criminals are constantly looking for ways to break in undetected, and that it is only a matter of time before they succeed – if they haven’t already. While organizations of all sizes are being attacked, larger organizations are the leaders in this proactive approach to hunting down intruders, i.e. “threat hunting”. They have recognized that active threat hunting increases detection rates compared with relying on incident detection alone – i.e. waiting for alerts from automated intrusion detection systems that may never come.

Best practice involves formulating a hypothesis about what may be occurring, then seeking to confirm it. Hypotheses generally fall into categories such as:

  • Driven by threat intelligence from industry news, reports, and feeds.
    e.g. newsfeeds report a dramatic increase in occurrences of a specific ransomware variant targeting your industry, so a threat hunt is initiated with the hypothesis that your organization is being targeted with this ransomware.
  • Driven by situational awareness, i.e. a focus on the infrastructure, assets and data most important to the organization.
    e.g. a hypothesis that your customers’ records are the “crown jewels”, so hackers will be trying to gain access to exfiltrate this data.

Having developed a hypothesis as a starting point, leading organizations rely on a range of tools and resources to threat hunt efficiently and effectively:

Historic Data from Hardware, Software and the Network
  • Infrastructure Logs from the individual components of hardware and software that form your IT environment, e.g. firewalls, IDS, switches, routers, databases, and endpoints. These logs capture notable events, alarms and other useful information which, when pieced together, can provide valuable insight into historic activity in your environment. They’re like the study notes you take from a textbook: highly useful, but not a full record – just a summary of what was considered notable. Also, be wary that hackers often delete or modify logs to remove evidence of their malicious activity.
  • Summarized network data (a.k.a. “packet metadata” or “network telemetry”). Traffic on network links can be captured and analyzed in real time to generate a feed of summary information characterizing the network activity (see the sketch after this list). The information that can be obtained goes well beyond the flow summaries that NetFlow provides, e.g. by identifying and summarizing activity and anomalies up to and including layer 7, such as email header information and expired certificates. This metadata can be very useful in hunts and investigations, particularly for correlating network traffic with events and activity from infrastructure logs and with user activity. Also, unlike logs, packet metadata cannot be easily deleted or modified.
  • Packet-level network history. By capturing and storing packets from a network link, you have a verbatim copy of the communication over that link, allowing you to see precisely what was sent and received, with zero loss of fidelity. Some equipment such as firewalls and IDSs captures small samples of packets, but these represent just a fraction of a second of communications and must therefore be automatically triggered by a specific alarm or event. Capturing and storing all packets (“full packet capture”, “100% packet capture”) is the only way to obtain a complete history of all communications. Historically, the barriers to full packet capture have been the cost of the required storage and the challenge of locating the packets of interest, given the sheer volume of data. However, recent advances in technology are now breaking down those barriers.
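
To make “summarized network data” concrete, here is a toy sketch – Python with Scapy, and a hypothetical capture file – that derives simple flow summaries from raw packets. Real metadata generators go much further, up to and including layer-7 attributes such as email headers and certificate details:

```python
from collections import defaultdict
from scapy.all import rdpcap, IP

# Aggregate packets into per-flow summaries keyed by (source, destination, protocol).
flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
for pkt in rdpcap("capture.pcap"):  # assumed capture of a monitored link
    if IP not in pkt:
        continue
    key = (pkt[IP].src, pkt[IP].dst, pkt[IP].proto)
    flows[key]["packets"] += 1
    flows[key]["bytes"] += len(pkt)

# Report the five busiest flows by volume.
for (src, dst, proto), stats in sorted(flows.items(),
                                       key=lambda kv: -kv[1]["bytes"])[:5]:
    print(f"{src} -> {dst} proto={proto}: "
          f"{stats['packets']} pkts, {stats['bytes']} bytes")
```
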
Baselines

Baselines are an understanding of what is normal and what is anomalous. Threat hunting involves examining user, endpoint, and network activity, searching for IoAs (Indicators of Attack) and IoCs (Indicators of Compromise) – i.e. “clues” pointing to possible intrusions and malicious activity. The challenge is knowing which activity is normal and which is anomalous. Without that knowledge, in many cases you will not know whether certain activity is to be expected in your environment or whether it should be investigated.
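
A baseline can be as simple as a rolling statistical profile. The sketch below – plain Python, with purely illustrative numbers – flags an hour whose connection count deviates sharply from the historical mean, which is the same idea SIEM baselining applies at scale:

```python
import statistics

# Hourly connection counts for a host; the last value is clearly anomalous.
hourly_connections = [120, 115, 130, 118, 125, 122, 119, 940]

history = hourly_connections[:-1]          # the learned baseline
mean = statistics.mean(history)
stdev = statistics.stdev(history)

latest = hourly_connections[-1]
zscore = (latest - mean) / stdev
if zscore > 3:                             # 3-sigma rule of thumb
    print(f"Anomaly: {latest} connections (z-score {zscore:.1f}) - investigate")
```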

A Centralized Location for Logs and Metadata

Because there are so many disparate sources of logs, centralized collection and storage is a practical necessity for organizations with substantial IT infrastructure. Most organizations use a SIEM (Security Information and Event Manager), which may have a dedicated database for storage of logs and metadata, or may use an enterprise data lake. SIEMs can correlate data from multiple sources, support rule-based triggers, and can feature Machine Learning algorithms able to learn what activity is normal (i.e. “baselining”). Having learned what is normal, they can then identify and flag anomalous activity.
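
The kind of correlation a SIEM performs can be illustrated with a toy example – plain Python, with fabricated sample records – joining a firewall log event to flow metadata by host and time window:

```python
from datetime import datetime, timedelta

# A firewall log event and some flow metadata (illustrative values only).
log_events = [
    {"time": datetime(2019, 8, 1, 14, 2), "host": "10.0.0.5",
     "event": "blocked outbound connection"},
]
flow_metadata = [
    {"start": datetime(2019, 8, 1, 14, 1), "src": "10.0.0.5",
     "dst": "203.0.113.7", "bytes": 52_000_000},
]

WINDOW = timedelta(minutes=5)  # events within 5 minutes count as related
for ev in log_events:
    related = [f for f in flow_metadata
               if f["src"] == ev["host"]
               and abs(f["start"] - ev["time"]) <= WINDOW]
    for f in related:
        print(f"{ev['event']} on {ev['host']} correlates with "
              f"{f['bytes']} bytes sent to {f['dst']}")
```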

Threat Intelligence

Threat intelligence is knowledge that helps organizations protect themselves against cyber attacks. It encompasses both business level and technical level detail. At a business level this includes general trends in malicious activity, individual breaches that have occurred, and how organizations are succeeding and failing to protect themselves. At a technical level, threat intelligence provides very detailed information on how individual threats work, informing organizations how to detect, block, and remove these threats. Generally this comes in the form of articles intended for consumption by humans, but also encompasses machine-readable intelligence that can be directly ingested by automated systems, e.g. updates to threat detection rules.
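
As a sketch of what machine-readable ingestion looks like – using a deliberately simplified, hypothetical feed format – an automated system might check observed flows against a feed of known-bad IP addresses like this:

```python
import json

# A hypothetical, simplified threat intelligence feed of known-bad IPs.
feed = json.loads('{"indicators": [{"type": "ip", "value": "203.0.113.7"}]}')
bad_ips = {i["value"] for i in feed["indicators"] if i["type"] == "ip"}

# Observed flows as (source, destination) pairs.
flows = [("10.0.0.5", "203.0.113.7"), ("10.0.0.8", "198.51.100.2")]
for src, dst in flows:
    if dst in bad_ips:
        print(f"Flow {src} -> {dst} matches threat intel - raise an alert")
```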

Frameworks and Regulations

The regulatory environment influences threat hunting, and cyber defense in general. In many countries, regulations impose obligations on disclosure of breaches, including what information must be provided, when, and to which stakeholders. There are a also a number of frameworks addressing cyber security at the governance level, which in some cases overlap with regulations, dealing with many of the same issues and considerations. Collectively, these frameworks and regulations help to ensure organizations implement good strategies, policies, processes and tools.

In the next article in this series, we explore the frameworks and regulations that apply to threat hunting, and which ensure organizations implement appropriate strategies, policies, processes and tools.


The Importance of Network Data to Threat Hunting (Part 1)

Original Entry by: Robert Salier

Introduction to Threat Hunting

By Robert Salier, Product Manager, Endace


Criminal hackers are stealthy. They put huge efforts into infiltrating without triggering intrusion detection systems or leaving traces in logs and metadata … and often succeed. So you need to actively go searching for them. That’s why SecOps teams are increasingly embracing threat hunting.

This is the first in a series of blog articles where we discuss various aspects of threat hunting, and how visibility into network traffic can increase the efficiency and effectiveness of threat hunting. This visibility is often the difference between detecting an intruder and missing them – and between having the conclusive evidence you need to respond to an attack and having none.

In December 2015, Ukraine suffered a power grid cyber attack that disrupted power distribution to the nation’s citizens. Thirty substations were switched off and damaged, leaving around 230,000 people without power.

This attack was meticulously planned and executed, with the attackers having first gained access over six months before they finally triggered the outage. There were many stages of intrusion and attack, leaving traces that were only identified in subsequent investigations. Well-planned and executed threat hunting would probably have uncovered this intruder activity and averted the serious outages that took place.

This is a good example of why, in the last few years, threat hunting has been gaining substantial momentum and focus amongst SecOps teams, with increasing efforts to better define and formalize it as a discipline. You’ll see a range of definitions with slightly different perspectives, but the following captures the essence of Threat Hunting:

The process of proactively and iteratively searching through IT infrastructure to detect and isolate advanced threats that evade existing security solutions.

There’s also some divergence in approaches to threat hunting, and in the aspects that individual organizations consider most important, but key themes are:

  • To augment automated detection, increasing the likelihood that threats will be detected.
  • To provide insight into attackers’ Tactics, Techniques and Procedures (TTPs) and hence inform an organization of where it should focus its resources and attention.
  • To identify if, and where, automated systems need updating – e.g. with new triggers.

So, threat hunting involves proactively seeking out attacks on your IT infrastructure that are not detected by automated systems such as Intrusion Detection Systems (IDSs), firewalls, Data Leakage Prevention (DLP) and Endpoint Detection and Response (EDR) solutions. It’s distinct from incident response, which is reactive. It may, however, result in an incident response being triggered.

Although threat hunting can be assisted by machine-based tools, it is fundamentally an activity performed by people, not machines, heavily leveraging human intelligence, wisdom and experience.

In the next article, we explore how leading organizations approach threat hunting, and the various data, resources, systems, and processes required to threat hunt effectively and efficiently.

In the meantime, feel free to browse the Useful References page in our Threat Hunting section on endace.com, which contains both a glossary and useful links to various pages related to threat hunting. Below are some additional useful references.

References

(1) Threat Hunting Report (Cyber Security Insiders), p22

(2) 2018 Threat Hunting Survey Results (SANS), p13

(3) 2018 Threat Hunting Survey Results (SANS), p5

(4) Improving the Effectiveness of the Security Operations Center (Ponemon Institute), p10

(5) The Ultimate Guide To Threat Hunting, InfoSec Institute



The Importance of Network Data to Threat Hunting (Part 4)

Original Entry by: Robert Salier

How Endace Accelerates Threat Hunting

By Robert Salier, Product Manager, Endace



Despite having a variety of tools at their disposal, many organizations still struggle with detecting and investigating security threats effectively and efficiently.  Inevitably, some threats are not detected because skilled hackers expend a great deal of effort avoiding security monitoring systems and removing the evidence of their activity by deleting or modifying logs and files. Even when threats are detected, organizations often lack sufficient visibility to ascertain the exact scope and nature of the threat: to be certain they have completely removed it and to be totally confident they can detect and prevent a recurrence.

This is the final post in our series on threat hunting (see here for part 1, part 2 and part 3).

In this post, I take a look at how the EndaceProbe Analytics Platform can accelerate threat hunting: delivering deeper insight into network activity through rich network data that provides an independent and unadulterated view of activity in your environment. I also explain how the EndaceProbe’s open platform approach delivers significant productivity and cost benefits, breaking down traditional barriers to affordability and practicality.

Full Packet-Level Capture of Network History

Skilled hackers (and clever malware) routinely delete or modify logs and files containing traces of their malicious activity. However, it’s virtually impossible for them to remove traces of their presence from the traffic that traverses the network. So monitoring, capturing and analyzing network traffic is often the difference between detecting an intruder and missing them – and between having the conclusive evidence you need to address the threat and having none.

When malicious activity is detected, the next challenge is to obtain a clear picture of what has occurred. This is critical for several reasons. Firstly, enterprises have regulatory or policy obligations, such as complying with information security standards and breach disclosure regulations. Secondly, it’s critical to be able to keep stakeholders – including executive management, PR, Legal, HR, suppliers, partners, and customers – informed, and to be able to accurately answer their questions. And last, but not least, a clear, unambiguous picture of what has occurred is essential to confirm that the threat has been neutralized and to be confident that sufficient measures are in place to prevent a recurrence.

As discussed in Part 2 of this series, log files and other data sources such as flow-based network data can provide valuable insight into activity. And they might enable you to detect a threat. The problem is these data sources often don’t contain sufficient detail to enable a clear picture of exactly what happened, how it happened and what the impact is. Server and firewall logs, for example, might reveal communication between a host on your network and a malicious external host. But they can’t tell you what the actual contents of that communication were.

Capturing and storing packet history, on the other hand, gives you a verbatim copy of communications over the network, allowing you to see precisely what was sent and received with zero loss of fidelity. Packets contain all the contents: allowing accurate reconstruction of the entire conversation including file and document contents, web page interactions, emails, audio and video streams, etc.
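
To illustrate what reconstruction means at the packet level, here is a deliberately simplified sketch – Python with Scapy, with a hypothetical capture file and endpoints – that reassembles one direction of a TCP conversation by sequence number. Production-grade reassembly must also handle retransmissions, overlapping segments, gaps and sequence wraparound:

```python
from scapy.all import rdpcap, IP, TCP, Raw

CLIENT, SERVER = "10.0.0.5", "203.0.113.7"  # hypothetical endpoints

# Collect client-to-server payloads keyed by TCP sequence number;
# setdefault() keeps the first copy if a segment was retransmitted.
segments = {}
for pkt in rdpcap("conversation.pcap"):
    if (IP in pkt and TCP in pkt and Raw in pkt
            and pkt[IP].src == CLIENT and pkt[IP].dst == SERVER):
        segments.setdefault(pkt[TCP].seq, bytes(pkt[Raw].load))

# Concatenate payloads in sequence order to recover what was actually sent.
stream = b"".join(payload for _, payload in sorted(segments.items()))
print(f"Recovered {len(stream)} bytes of client-to-server payload")
```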

Research report from EMA identifies packet capture as a key enabler for stronger security

Enterprise Management Associates (EMA) surveys enterprises annually to report on the strategies leading organizations are adopting to strengthen their cyber defenses. In the 2019 edition of “Unlocking High Fidelity Security”, packet capture was highlighted as a key enabler of stronger cybersecurity.


Download a Free Copy


Open Platform Approach

EndaceProbes can host a range of third-party security solutions, including Intrusion Detection Systems, virtual next-gen firewalls, AI-based security tools, and many other commercial, open-source or custom security, network or application performance monitoring solutions. Because each EndaceProbe can host multiple tools, you only need to purchase and deploy packet capture hardware once. You then have the freedom to choose best-of-breed tools, and the agility to quickly deploy new and/or updated tools without changing the underlying hardware platform.

Threat hunters can also dramatically accelerate and streamline investigations thanks to pre-built integrations between EndaceProbes and many third-party tools.  These integrations enable analysts to click on an alarm/event in any of these tools to quickly retrieve and analyze the related full packet data that is recorded on the EndaceProbes on the network.

For more details check out The Benefits of an Open Analytics Platform.

Breakthrough density and affordability

We’re very proud of our breakthrough density and price per petabyte, putting a month or more of network history within reach of many more organizations.  Our EP-9200 EndaceProbes provide 40Gbps packet capture and built-in investigation tools, hosting capacity for up to 12 applications, and a petabyte of network history storage, all in a single appliance just four rack units high.

How do we do it?  Well, it’s not just an efficient organization and economies of scale.  We have smart engineers implementing proprietary hardware, real-time storage compression, and features such as our patented Smart Truncation™.  For more, check out https://www.endace.com/endaceprobe.

Breakthrough practicality

We realize that storing network history is of limited use if it is too difficult, expensive or time-consuming to extract value from it. We knew we had to provide a way to…

  • Centrally manage estates of EndaceProbes that may be global in scale to reduce the operational cost and minimize management overheads.
  • Enable SecOps, NetOps and IT teams to quickly and easily find packets of interest from within terabytes or petabytes of data that may be distributed across a global network. And do this from a central point without having to figure out where those packets were recorded or which EndaceProbe they are stored on.
  • Meet the needs of large, complex, globally distributed networks, with the ability to scale to provide virtually unlimited storage capacity and monitor links of any speed.

So we developed the EndaceFabric™ architecture.

EndaceFabric allows multiple EndaceProbes to be deployed at various points throughout a network and seamlessly connected to form a network-wide packet capture, recording and hosting fabric.  Analysts can perform investigations and search and mine recorded Network History across multiple EndaceProbes simultaneously from a single UI.  Similarly, administrators can centrally manage estates of hundreds of connected EndaceProbes making it easy to configure, update and monitor the health and performance of the entire estate.

EndaceFabric provides more than a single pane of glass for administration, search and data-mining, however. The architecture also allows EndaceProbes to be stacked or grouped to create logical EndaceProbes capable of capturing traffic at practically any line rate, with no limit on storage capacity.

EndaceFabric is also the key to amazingly fast searches for packets of interest.  Due to the inherently distributed, parallel architecture, and our advanced search algorithms, search times remain constant regardless of the number of EndaceProbes involved.  A needle-in-a-haystack search for specific packets-of-interest across a hundred EndaceProbes and a hundred petabytes of network history can take just seconds.
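
The intuition behind constant search times is simple fan-out parallelism: every probe searches only its own storage, so wall-clock time tracks the slowest single probe rather than the number of probes. The sketch below illustrates the pattern in Python – the hostnames and REST endpoint are hypothetical, not the Endace API:

```python
from concurrent.futures import ThreadPoolExecutor
import requests

PROBES = ["probe-nyc", "probe-lon", "probe-syd"]   # hypothetical hostnames
QUERY = {"filter": "host 203.0.113.7", "from": "2019-08-01T00:00Z"}

def search(probe):
    # Hypothetical search endpoint; each probe scans only its own storage.
    r = requests.get(f"https://{probe}/api/search", params=QUERY, timeout=30)
    return probe, r.json()

# All probes are queried in parallel; total time ~= the slowest single probe.
with ThreadPoolExecutor(max_workers=len(PROBES)) as pool:
    for probe, hits in pool.map(search, PROBES):
        print(probe, hits)
```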

For more details, check out https://www.endace.com/EndaceFabric, our videos describing the EndaceFabric architecture, and a demo showing our amazingly fast search.

And finally

This was the final article in our series on threat hunting, and how the EndaceProbe Analytics Platform can increase the efficiency and conclusiveness of threat hunts. We hope you found it useful.

If you’d like to find out more, please don’t hesitate to reach out to your local Endace representative, or contact us at https://www.endace.com/contact.


Watch Endace on Cisco ThreatWise TV from RSA 2019

Original Entry by: Endace

It was a privilege to attend this year’s RSA cybersecurity event in San Francisco, and one of our top highlights was certainly the opportunity to speak to Cisco’s ThreatWise TV host Jason Wright. Watch the video on Cisco’s ThreatWise TV (or below) as Jason interviews our very own Michael Morris to learn more about how Cisco and Endace integrate to accelerate and improve cyber incident investigations.

In this short 4-minute video, Michael demonstrates how Cisco Firepower and Stealthwatch can be used together to investigate intrusion events, using Cisco dashboards and EndaceVision to drill down into events by priority and classification to show where threats came from, who was affected, and whether any lateral movement occurred, as well as conversation history and traffic profiles. Michael also explains how Cisco and Endace work together to ‘find a needle in a haystack’ across petabytes of network traffic.

A big thanks to Cisco and to Jason for giving us this spotlight opportunity. If you have any questions about how Cisco and Endace integrations can accelerate and improve cyber incident investigation, visit our Cisco partner page.