Endace Packet Forensics Files: Episode #47

Original Entry by : Michael Morris

Michael talks to network forensics and incident response specialist, Jasper Bongertz.

By Michael Morris, Director of Global Business Development, Endace



What are some of the challenges of responding to a serious incident – such as a ransomware attack or advanced persistent attack? Where do you start, and what are the critical things you need to do?

In this episode we are lucky to welcome Jasper Bongertz, Head of Digital Forensics and Incident Response at G DATA Advanced Analytics in Germany. Jasper has a wealth of experience from working on the front line of incident response at G DATA as well as in his previous role at Airbus. He also has a long background in network forensics: he has been a Wireshark and network forensics instructor and continues to be a very active member of the Wireshark community.

Jasper starts by outlining some of the steps to mitigate “headless chicken mode”, which is what he often sees when an organization first encounters a serious incident.

The process starts with understanding exactly what has happened, and what the impact is so that a clear response plan and timeline for resolution can be established. This requires gathering the available evidence – including network packet data if it’s available. It’s important to be able to do this quickly – particularly in the case of ransomware attacks where the organization’s IT systems may be unavailable as a result of the attack. With ransomware, speed is crucial since the organization’s primary priority is typically to get back to an emergency operating state as quickly as possible. Jasper lists some of the tools that his team finds useful in rapidly gathering that critical evidence.

Once the scope of the incident has been established, you need to have the specific expertise on hand to investigate and understand what happened and how it happened so you can identify the right response. Typically, Jasper says, that will involve having at least an incident response specialist, a forensic expert, and a malware reverse engineer, but depending on the scale of the event may involve many others too.

Jasper outlines the most important steps organizations can take to protect themselves against ransomware attacks and to ensure they can recover in the event of a successful attack. The two most important of these are protecting domain administrator credentials to prevent privilege escalation, and ensuring backups are complete and protected from sabotage.

Lastly, Jasper discusses the changing cyberthreat landscape. He outlines why he thinks data exfiltration and extortion will become more common than ransomware and encryption, and why network data is critical to combat this growing threat.

Other episodes in the Secure Networks video/audio podcast series are available here. Or listen to the podcast here or on your favorite podcast platform.


Endace Packet Forensics Files: Episode #46

Original Entry by : Michael Morris

Michael talks to Gerald Combs, Wireshark Founder, and Stephen Donnelly, Endace CTO

By Michael Morris, Director of Global Business Development, Endace



How did Wireshark come to be, and what’s made it so successful – not just as the pre-eminent tool for analyzing network packet data, but as an open-source project in general?

In this episode I talk to Wireshark founder, Gerald Combs, and Endace CTO, Stephen Donnelly, about the origins of Wireshark, and why packet capture data is so crucial for investigating and resolving network security threats and network or application performance issues.

Gerald talks about the early days of Ethereal, a “packet sniffer” he originally created for his own use in his role at an ISP and subsequently released as open source – the project that later became Wireshark. That fortuitous decision was key, Gerald says, to the ongoing growth and success of the Wireshark project – which will turn 25 years old in July! It enabled developers from around the world to contribute to the project, creating a Windows version in the process, and helping Wireshark to become the gold-standard tool for network analysis, used by SecOps, NetOps and IT teams the world over.

Stephen has been using Wireshark right from the earliest days – when it was still called Ethereal – and is one of the many contributors to the project. Stephen and Gerald both talk about why packet analysis is so important for cybersecurity and network performance analysis (the ubiquitous “Packets Don’t Lie” T-shirt – available from the Wireshark Foundation store – says it all really), and discuss examples of the many and varied problems that Wireshark is helping people to solve.

Stephen outlines the differences between network flow data and packet capture data and why packet data is essential for solving some problems where flow data just doesn’t contain the level of detail required.

Wireshark is continually evolving, with support for new protocols and new UI enhancements that make it easier for analysts to slice and dice packet data. Gerald says that Wireshark is almost the perfect open-source project because it allows for a lot of parallel collaboration from contributors creating new dissectors, ensuring that Wireshark keeps up with the rapid pace of change in networking. Now that planning for Wireshark 5.x has started, Gerald also looks ahead to some of the possible new features that might appear in future releases.

And finally, Gerald talks about the new Wireshark Foundation (of which Endace is a sponsor), which has been set up to support the ongoing development of the Wireshark project and ensure it continues its resounding success into the future.

Wireshark is coming up on its 25th birthday and still going from strength to strength. Don’t miss this fascinating interview with the leader of one of the most successful open-source projects around. Gerald and Stephen’s insightful commentary, as well as some fantastic tips and tricks, makes this a must-watch episode.

Other episodes in the Secure Networks video/audio podcast series are available here. Or listen to the podcast here or on your favorite podcast platform.


Endace Packet Forensics Files: Episode #45

Original Entry by : Michael Morris

Michael talks to Dimitri McKay, Principal Security Strategist and CISO Advisor at Splunk

By Michael Morris, Director of Global Business Development, Endace



Increasingly complex systems, an expanding threat landscape, and an explosion in the number of potential entry points all make managing security at scale a daunting prospect. So what can you do to implement effective security at scale, and what are some of the pitfalls to avoid?

In this episode I talk with Dimitri McKay, Principal Security Strategist and CISO Advisor at Splunk, about where to start addressing the challenges of security at scale. He highlights the importance of robust risk assessment, developing clear security goals and ensuring leadership buy-in to the organization’s security strategy – as well as the importance of balancing the needs of users with the need to secure the enterprise.

Dimitri discusses some of the pitfalls that organizations often fall into, and what security leaders can do – and where they should start – to avoid making the same mistakes. He talks about the importance of thinking strategically not just tactically, of being proactive rather than just reactive, and of creating a roadmap for where the organization’s security needs to be in a year, two years, three years into the future.

Dimitri also highlights the need to collect the right data to ensure the organization can accomplish the security goals it has set, enable high-fidelity threat detection, and provide the necessary context for effective and efficient threat response. Security teams started by collecting what they already had, he says – firewall logs, authentication logs and so on – but this isn’t necessarily sufficient to accomplish their objectives, because it focuses more on IT risks than on critical business risks.

Finally, Dimitri puts on his futurist hat to predict what security teams should be on the lookout for. Not surprisingly, he predicts that the rapid development of AI tools such as OpenAI’s ChatGPT has huge potential benefits for cyber defenders. But these tools will also enable cyber attackers to create increasingly sophisticated threats and circumvent defenses. AI is both an opportunity and a threat.

Other episodes in the Secure Networks video/audio podcast series are available here. Or listen to the podcast here or on your favorite podcast platform.


Endace Packet Forensics Files: Episode #44

Original Entry by : Michael Morris

Michael talks to David Monahan, Business Information Security Officer and former security researcher.

By Michael Morris, Director of Global Business Development, Endace



Cyberthreats are something all organizations are facing. But Pharmaceutical and Healthcare providers have some unique challenges and vulnerabilities, and come in for more than their fair share of attention from threat actors. What can your SOC team learn from some of the best practices these organizations are implementing? Are you architecting your environment to separate IoT devices from other critical assets, and are you managing them with the same level of scrutiny?

In this episode I talk with David Monahan, an expert with 30 years’ experience in cybersecurity and network management and a former researcher at Enterprise Management Associates. David draws on his research background as well as his current experience as Business Information Security Officer at a large global pharmaceutical company.

He talks about some of the similarities and differences the Healthcare and Pharmaceutical industries have with other industries. He shares his insights into why the Healthcare and Pharmaceutical industries are so strongly targeted by threat actors and things consumers or patients can do to help protect themselves and their information.

David also discusses some of the unique challenges Healthcare organizations have around IoT devices and suggests ways to help manage these risks. He shares some best practices your security organization can leverage and points out tools and solutions that are critical for any security stack.

Finally, David talks about what training and skills are important to ensure your SOC analysts are as prepared as possible to defend against cyberthreats.

Other episodes in the Secure Networks video/audio podcast series are available here. Or listen to the podcast here or on your favorite podcast platform.


Endace Packet Forensics Files: Episode #43

Original Entry by : Michael Morris

Michael talks to Jim Mandelbaum, Field CTO at Gigamon

By Michael Morris, Director of Global Business Development, Endace



As workloads move to the cloud, and infrastructure becomes increasingly complex, how can you ensure that your security posture evolves accordingly? It’s essential to ensure visibility across the entire network if you are to secure it effectively.

In this episode of the Endace Packet Forensics Files, I talk with Jim Mandelbaum, Field CTO at Gigamon, about what “security at scale” means. Jim draws on more than a decade of experience as a CTO in the security industry, and shares best-practice tips to ensure that as your infrastructure evolves, your security posture keeps pace.

Jim highlights the importance of leveraging automation to help deal with increasingly complex network environments. Key to this is having visibility into exactly what’s happening on your network – including on-prem, cloud and hybrid-cloud environments – so you can make informed decisions about what traffic needs to be monitored and recorded, and what tasks can be automated to ensure threat visibility.

It’s also critical to break down team silos, Jim says. Otherwise, responsibility has a tendency to fall through the cracks. Teams need to collaborate closely, and include the security team on IT strategy planning and particularly cloud migration projects. That makes it easier to determine who is responsible for what parts of security from the get-go. When teams have the opportunity to discuss the challenges they face they can often leverage solutions that have been successfully implemented elsewhere in the organization – saving time, resources and budget as a result.

Lastly, Jim highlights the importance of talking with your vendors about their future product strategies to ensure they align with your organization’s plans. Otherwise, there’s a risk of divergence which could prove very costly down the track.

Other episodes in the Secure Networks video/audio podcast series are available here. Or listen to the podcast here or on your favorite podcast platform.


Network Security and Management Challenges Blog Series – Part 4

Original Entry by : Endace

Driving Economic Efficiency in Cyber Defense

Key Research Findings

  • Available budget, freedom to choose the best solutions, and platform fatigue all impact the ability of system architects to design and deploy the best solutions to meet the organization’s needs.
  • 78% of system architects reported that platform fatigue is a significant challenge, with 29% rating the level of challenge as high.
  • More than 90% of respondents reported that the process of acquiring and deploying security, network or application performance platforms is challenging, with almost half reporting that it is either extremely or very challenging.

Most of what’s written about cybersecurity focuses on the mechanics of attacks and defense. But, as recent research shows, the economics of security is just as significant. It’s not just lack of available budget – departments always complain about that – but how they are forced to allocate their budgets.

Currently, security solutions are often hardware-based, which forces organizations into making multiple CAPEX investments – with accompanying complex, slow purchase processes.

More than three-quarters of respondents to the survey reported that “the challenge of constraints caused by CAPEX cycle (e.g. an inability to choose best possible solutions when the need arises) is significant.” Almost half reported being stuck with solutions that have “outlived their usefulness, locked into particular vendors or unable to choose best-of-breed solutions.”

Speed of deployment is also a significant challenge for organizations, with more than 50% of respondents reporting that “deploying a new security, network or application performance platform takes six to twelve months or longer.” 

As outlined in the previous post, existing security solutions are expensive, inflexible, hardware-dependent and take too long to deploy or upgrade. The process of identifying a need, raising budget, testing, selecting and deploying hardware-based security and performance monitoring solutions simply takes too long. And the cost is too high.

Contrast this with cyber attackers, who don’t require costly hardware to launch their attacks. They are not hampered by having to negotiate slow, complex purchase and deployment cycles. And often they leverage their target’s own infrastructure for attacks. The truth is that the economics of cybersecurity is broken, with the balance tilted radically in favor of attackers at the expense of their victims.

Reshaping the economics of cyberdefense

Companies have a myriad of choices when it comes to possible security, network performance and application performance monitoring solutions. Typically, they deploy many different tools to meet their specific needs. 

As discussed in the previous post, the lack of a common hardware architecture for analytics tools has prevented organizations from achieving the same cost savings and agility in their network security and monitoring infrastructure that virtualization has enabled in other areas of their IT infrastructure. As a result, budgets are stretched, organizations don’t have the coverage they’d like (leading to blindspots in network visibility) and deploying and managing network security and performance monitoring tools is slow, cumbersome and expensive.

Consolidating tools onto a common hardware platform – such as our EndaceProbe – helps organizations overcome many of the economic challenges they face:

  • It lets them reduce their hardware expenditure, resulting in significant CAPEX and OPEX savings. 
  • Reduced hardware expenditure frees up budget that can be directed towards deploying more tools in more places on the network – to remove visibility blind spots – and deploying tools the company needs but couldn’t previously afford.
  • Teams gain the freedom to choose what tools they adopt without being locked into “single-stack” vendor solutions. 
  • Teams can update or replace security and performance monitoring functions by deploying software applications on the existing hardware platform without a rip-and-replace. This significantly reduces cost and enables much faster, more agile deployment.

The cost of the hardware infrastructure needed to protect and manage the networks can also be shared by SecOps, NetOps, DevOps and IT teams, further reducing OPEX and CAPEX costs and facilitating closer cooperation and collaboration between teams.

For architects, a common hardware platform becomes a network element that can be designed into the standard network blueprint – reducing complexity and ensuring visibility across the entire network. And for IT teams responsible for managing the infrastructure it avoids the platform fatigue that currently results from having to manage multiple different hardware appliances from multiple different vendors.

Because analytics functionality is abstracted from the underlying EndaceProbe hardware, that functionality can be changed or upgraded easily, enabling – as we saw in the last post – far more agile deployment and the freedom to deploy analytics tools that best meet the company’s needs rather than being locked into specific vendors’ offerings.

Equally importantly, it extends the useful life of the EndaceProbe hardware too. No longer does hardware have to be replaced in order to upgrade or change analytics functionality. And as network speeds and loads increase, older EndaceProbes can be redeployed to edge locations and replaced at the network core with newer models offering higher-speeds and greater storage density. This ensures companies get maximum return on their hardware investment.

Lastly, their modular architecture allows multiple, physical EndaceProbes to be stacked or grouped to form centrally-managed logical EndaceProbes capable of scaling to network speeds of hundreds of gigabits-per-second and storing petabytes of network history.

A Final Word

This blog series has looked at the three key challenges – Visibility, Agility and Economic Efficiency (this post) – that enterprises report they face in protecting their networks and applications from cyber threats and costly performance issues. These challenges are interrelated: it is only by addressing all three that organizations can achieve the level of confidence and certainty necessary to effectively protect their critical assets.


Network Security and Management Challenges – Part 3: Agility

Original Entry by : Endace

The Need for Agile Cyberdefense – and How to Achieve it

Key Research Findings

  • 75% of organizations report significant challenges with alert fatigue, and 82% report significant challenges with tool fatigue.
  • 91% of respondents report significant challenges in “integrating solutions to streamline processes, increase productivity and reduce complexity”.
  • Investigations are often slow and resource-intensive, with 15% of issues taking longer than a day to investigate and involving four or more people in the process.

In part two of this series of blog posts, we looked at Visibility as one of the key challenges uncovered in the research study Challenges of Managing and Securing the Network 2019.

In this third post, we’ll be discussing another of the key challenges that organizations reported: Agility.

From a cybersecurity and performance management perspective, the term “Agility” can mean two different things. In one sense it can mean the ability to investigate and respond quickly to cyber threats or performance issues. But it can also refer to the ability to rapidly deploy new or upgraded solutions in order to evolve the organization’s ability to defend against, or detect, new security threats or performance issues. 

To keep things clear let’s refer to these two different meanings for agility as “Agile Response” and “Agile Deployment.”

Enabling Agile Response

In the last post, we looked at the data sources organizations can use to improve their visibility into network activity – namely using network metadata, combined with full packet data, to provide the definitive evidence that enables analysts to quickly and conclusively investigate issues. 

In order to leverage this data, the next step is to make it readily available to the tools and teams that need access to it. Tools can access the data to more accurately detect issues, and teams get quick and easy access to the definitive evidence they need to investigate and resolve issues faster and more effectively. 

Organizations report that they are struggling with two significant issues when it comes to investigating and resolving security or performance issues. 

The first is they are drowning in the sheer volume of alerts being reported by their monitoring tools. Investigating each issue is a cumbersome and resource-intensive process, often involving multiple people. As a result there is typically a backlog of issues that never get looked at – representing an unknown level of risk to the organization.

The second issue, which compounds the alert fatigue problem, is that the tools teams use are not well integrated, making the investigation process slow and inefficient. In fact, 91% of the organizations surveyed reported significant challenges in “integrating solutions to streamline processes, increase productivity and reduce complexity.” The result is that analysts are forced to switch from tool to tool (so-called “swivel chair integration”) to try to piece together a “big-picture” view of what happened.

Integrating network metadata and packet data into security and performance monitoring tools is a way to overcome both these challenges:

  • It gives teams access to a shared, authoritative source of truth about network activity. Analysts can pivot from an alert, or a metadata query, directly to the related packets for conclusive verification of what took place. This simplifies and accelerates investigations, making teams dramatically more productive and eliminating alert fatigue.
  • It enables a standardized investigation process. Regardless of the tool an analyst is using, they can get directly from an alert or query to the forensic detail – the packets – in the same way every time. 
  • It enables data from multiple sources to be correlated more easily. This is typically what teams are looking to achieve through tighter tool integration. Network data provides the “glue” (IP addresses, ports, time, application information etc.) that ties together data from other diverse sources such as log files and SNMP alerts – see the sketch below.
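
As a simple, self-contained illustration of that “glue”, the sketch below joins hypothetical firewall log entries to hypothetical network metadata records using nothing more than a shared IP address and an overlapping time window. The record layouts and field names are invented for the example; real deployments would parse these from actual firewall logs and flow or metadata exports.

```python
from datetime import datetime, timedelta

# Hypothetical records from two different sources.
firewall_logs = [
    {"time": datetime(2019, 6, 3, 14, 2, 11), "src_ip": "10.1.4.23", "action": "allow"},
    {"time": datetime(2019, 6, 3, 14, 7, 42), "src_ip": "10.1.9.77", "action": "deny"},
]
network_metadata = [
    {"first_seen": datetime(2019, 6, 3, 14, 2, 9),
     "last_seen": datetime(2019, 6, 3, 14, 3, 30),
     "src_ip": "10.1.4.23", "dst_ip": "203.0.113.50",
     "dst_port": 443, "bytes": 18734221},
]

def correlate(logs, metadata, window=timedelta(minutes=2)):
    """Yield (log, flow) pairs that share an IP address and overlap in time."""
    for log in logs:
        for flow in metadata:
            same_host = log["src_ip"] in (flow["src_ip"], flow["dst_ip"])
            in_window = (flow["first_seen"] - window
                         <= log["time"]
                         <= flow["last_seen"] + window)
            if same_host and in_window:
                yield log, flow

for log, flow in correlate(firewall_logs, network_metadata):
    print(f"{log['time']} {log['action']} {log['src_ip']} -> "
          f"{flow['dst_ip']}:{flow['dst_port']} ({flow['bytes']} bytes)")
```

Once a log entry is matched to a flow record in this way, that flow’s time window and 5-tuple define exactly which slice of recorded packet history to retrieve for verification.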

By leveraging a common, authoritative source of packet-level evidence organizations can create a “community of interoperability” across all their security and performance monitoring tools that drives faster response and greater productivity.

By integrating this packet-level network history with their security tools, SecOps teams can pivot quickly from alerts to concrete evidence, reducing investigation times from hours or days to just minutes.

Endace’s EndaceProbe Analytics Platform does this by enabling solutions from leading security and performance analytics vendors – such as BluVector, Cisco, Darktrace, Dynatrace, Micro Focus, IBM, Ixia, Palo Alto Networks, Splunk and others – to be integrated with and/or hosted on the EndaceProbe platform. Hosted solutions can analyze live packet data for real-time detection, or analyze recorded data for back-in-time investigations.

The EndaceProbe’s powerful API-based integration allows analysts to go from alerts in any of these tools directly to the related packet history for deep, contextual analysis with a single click. 
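
The EndaceProbe API itself isn’t shown here, but the pivot it enables is easy to sketch in generic terms: take the alert’s 5-tuple and timestamp, pad the time window to catch lead-up and follow-on traffic, and turn that into a query against recorded packet history. The sketch below – with invented alert fields and file names, and tshark over a local pcap standing in for the packet-history back end – shows the general idea; it assumes a reasonably recent tshark is on the PATH.

```python
import subprocess
from datetime import datetime, timedelta

# Hypothetical IDS alert; in practice this comes from the analytics tool.
alert = {
    "time": datetime(2019, 6, 3, 14, 2, 15),
    "src_ip": "10.1.4.23",
    "dst_ip": "203.0.113.50",
    "dst_port": 443,
}

# Pad the window so the slice includes lead-up and follow-on traffic.
start = alert["time"] - timedelta(minutes=2)
end = alert["time"] + timedelta(minutes=5)

# Build a Wireshark display filter from the alert's attributes.
time_fmt = "%b %d, %Y %H:%M:%S"
display_filter = (
    f'ip.addr == {alert["src_ip"]} && ip.addr == {alert["dst_ip"]} '
    f'&& tcp.port == {alert["dst_port"]} '
    f'&& frame.time >= "{start.strftime(time_fmt)}" '
    f'&& frame.time <= "{end.strftime(time_fmt)}"'
)

# Pull just the matching packets out of the recorded capture for analysis.
subprocess.run(
    ["tshark", "-r", "recorded_history.pcap",
     "-Y", display_filter,
     "-w", "alert_slice.pcap"],
    check=True,
)
```

However the query is issued – via an API, a hosted tool or a command line – the essential ingredients are the same: a time window and the network attributes taken from the alert.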

The Road to Agile Deployment

The research showed that many organizations report their lack of visibility is due to having “too few tools in too few places in the network.” There are two reasons for this. One is economic – and we’ll look at that in the next post. The other is that the process of selecting and deploying new security and performance monitoring solutions is very slow.

The reason deploying new solutions is so slow is that they are typically deployed as hardware-based appliances. And as we all know, the process of acquiring budget for, evaluating, selecting, purchasing and deploying hardware can take months. Moreover, appliance-based solutions are prone to obsolescence and are difficult or impossible to upgrade without complete replacement. 

All these things make for an environment that is static and slow-moving: precisely the opposite of what organizations need when seeking to be agile and evolve their infrastructure quickly to meet new needs. Teams cannot evolve systems quickly enough to meet changing needs – which is particularly problematic when it comes to security, because the threat landscape changes so rapidly. As a result, many organizations are left with security solutions that are past their use-by date but can’t be replaced until their CAPEX value has been written down.

The crux of the problem is that many analytics solutions rely on collecting and analyzing network data – which means every solution typically includes its own packet capture hardware. 

Unlike the datacenter, where server virtualization has delivered highly efficient resource utilization, agile deployment and significant cost savings, there isn’t – or rather hasn’t been until now – a common hardware platform that enables network security and performance analytics solutions to be virtualized in the same way. A standardized platform for these solutions needs to include the specialized, dedicated hardware necessary for reliable packet capture and recording at high speed.

This is why Endace designed the EndaceProbe™ Analytics Platform. Multiple EndaceProbes can be deployed across the network to provide a common hardware platform for recording full packet data while simultaneously hosting security and performance analytics tools that need to analyze packet data. 

Adopting a common hardware platform removes the hardware dependence that currently forces organizations to deploy multiple hardware appliances from multiple vendors and frees them up to deploy analytics solutions as virtualized software applications. This enables agile deployment and gives organizations the freedom to choose the security, application performance and network performance solutions that best suit their needs, independent of the underlying hardware.

In the next post, we’ll look at how a common platform can help address some of the economic challenges that organizations face in protecting their networks. 


Network Security and Management Challenges – Part 2: Visibility

Original Entry by : Endace

Stop Flying Blind: How to Ensure Network Visibility

Network Visibility Essential to Network Security

Key Research Findings

  • 89% of organizations lack sufficient visibility into network activity to be certain about what is happening.
  • 88% of organizations are concerned about their ability to resolve security and performance problems quickly and accurately.

As outlined in the first post in this series, lack of visibility into network activity was one of the key challenges reported by organizations surveyed by VIB for the Challenges of Managing and Securing the Network 2019 research study. This wasn’t a huge surprise: we know all too well that a fundamental prerequisite for successfully protecting networks and applications is sufficient visibility into network activity. 

Sufficient visibility means being able to accurately monitor end-to-end activity across the entire network, and recording reliable evidence of this activity that allows SecOps, NetOps and DevOps teams to react quickly and confidently to any detected threats or performance issues. 

Context is Key

It might be tempting to suggest that lack of network visibility results from not collecting enough data. In reality, the problem is not having enough of the right data: data that provides the context for a coherent, big-picture view of activity, together with the detail needed for accurate event reconstruction. This leaves organizations questioning their ability to adequately protect their networks.

Without context, data is just noise. Data tends to be siloed by department. What is visible to NetOps may not be visible to SecOps, and vice versa. It is often siloed inside specific tools too, forcing analysts to correlate data from multiple sources to investigate issues because they lack an independent and authoritative source of truth about network activity. 

Typically, organizations rely on data sources such as log files and network metadata, which lack the detail necessary for definitive event reconstruction. For instance, while network metadata might show that a host on the network communicated with a suspect external host, it won’t tell you exactly what was transferred. For that, you need full packet data.
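
As a minimal illustration of that difference, the sketch below uses scapy (assuming it is installed and the recording is an ordinary pcap file) to pull out the actual TCP payload bytes exchanged with a suspect external address – the detail that metadata alone cannot provide. The IP address and file name are placeholders, and the naive concatenation here ignores TCP reassembly, which a real investigation tool would handle properly.

```python
from scapy.all import PcapReader, IP, TCP  # pip install scapy

SUSPECT_HOST = "203.0.113.50"           # placeholder suspect external address
CAPTURE_FILE = "recorded_history.pcap"  # placeholder capture file

payload = bytearray()
for pkt in PcapReader(CAPTURE_FILE):    # stream packets to keep memory use bounded
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
        continue
    if SUSPECT_HOST not in (pkt[IP].src, pkt[IP].dst):
        continue
    # Append the TCP payload bytes; this ignores reordering and retransmissions,
    # which a proper stream reassembler would handle.
    payload += bytes(pkt[TCP].payload)

print(f"{len(payload)} payload bytes exchanged with {SUSPECT_HOST}")
print(payload[:200])  # the first bytes often reveal the protocol or the content moved
```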

In addition, network metadata and packet data are the only data sources that are immune to potential compromise. Log files and other data sources can be tampered with by cyber attackers to hide evidence of their presence and activity; or may simply not record the vital clues necessary to investigate a threat or issue.

Combining Network Metadata with Full Packet Data for 100% Visibility

The best possible solution to improving visibility is a combination of full packet data and rich network metadata. Metadata gives the big picture view of network activity and provides an index that allows teams to quickly locate relevant full packet data. Full packet data contains the “payload” that lets teams reconstruct, with certainty, what took place.
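
To make the two roles concrete, here is a rough sketch – again using scapy, with placeholder file names – that derives simple flow-level metadata from full packet data. Each summary record notes who talked to whom, how much was transferred and when, which is precisely the index an analyst needs to locate the right slice of packets to examine in detail.

```python
from collections import defaultdict
from scapy.all import PcapReader, IP, TCP  # pip install scapy

flows = defaultdict(lambda: {"packets": 0, "bytes": 0, "first": None, "last": None})

for pkt in PcapReader("recorded_history.pcap"):   # placeholder capture file
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
        continue
    # Key each direction separately; a fuller implementation would fold both
    # directions of a conversation into a single bidirectional flow record.
    key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
    flow = flows[key]
    flow["packets"] += 1
    flow["bytes"] += len(pkt)
    ts = float(pkt.time)
    flow["first"] = ts if flow["first"] is None else min(flow["first"], ts)
    flow["last"] = ts if flow["last"] is None else max(flow["last"], ts)

# The summary records are the "index": small enough to search quickly, and each
# one points back to a precise time window in the full capture.
top_talkers = sorted(flows.items(), key=lambda kv: kv[1]["bytes"], reverse=True)[:10]
for (src, sport, dst, dport), f in top_talkers:
    print(f"{src}:{sport} -> {dst}:{dport}  "
          f"{f['packets']} pkts  {f['bytes']} bytes  {f['last'] - f['first']:.1f}s")
```

In practice this metadata would normally be generated at capture time by the capture infrastructure or a dedicated tool rather than computed after the fact, but the relationship between the summary records and the underlying packets is the same.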

Collecting both types of data gives NetOps, DevOps and SecOps teams the information they need to quickly investigate threats or performance problems, coupled with the ability to see precisely what happened, so they can respond with confidence.

This combination provides the context needed to deliver both a holistic picture of network activity and the detailed granular data required to give certainty. It also provides an independent, authoritative source of network truth that makes it easy to correlate data from multiple sources – such as log files – and validate their accuracy.

With the right evidence at hand, teams can respond more quickly and accurately when events occur. 

In the next post in this series, we’ll look at how to make this evidence easily accessible to the teams and tools that need it – and how this can help organizations be more agile in responding to security threats and performance issues.