Endace Packet Forensics Files: Episode #34

Original Entry by : Mark Evans

Michael talks to Rick Peters, CISO Operational Technology, Fortinet

By Michael Morris, Director of Global Business Development, Endace



Increasingly, the security of Operational Technology (OT) – Industrial Control Systems and SCADA – is a major focus of concern. These systems are used in many environments across industries such as manufacturing, transportation, energy, critical infrastructure and many more, and are a juicy target for both sophisticated, nation-state attackers and cybercriminals.

In this episode of the Endace Packet Forensics Files I talk with Rick Peters, CISO Operational Technology at Fortinet. With a long career in engineering and almost four decades in US Intelligence before taking on his role at Fortinet, Rick knows intimately how attackers target OT systems and has spent many years helping to defend them from cyber attackers.

Rick talks about the importance of trust in OT environments: trust in their ability to keep delivering safe and continuous operations, and how we can bring some of the discipline developed in IT cyberdefense into the OT environment. He outlines the importance of a “consequence-driven strategy”: a deep understanding of the risks and vulnerabilities that a given system presents, coupled with a thorough assessment of the consequences of a successful compromise. He also stresses the importance of having a well-planned, and tested, response plan that addresses both IT and OT systems.

Rick has some great advice for cybersecurity leaders about where to start building a robust OT security posture and the importance of having IT security and OT security working in parallel. You won’t want to miss this episode!

Other episodes in the Secure Networks video/audio podcast series are available here.



Successful Endace 2021/22 Internship Program concludes for another year

Original Entry by : Katrina Schollum

The six interns in our Summer 2021/22 Internship Program joined us at our R&D centre in Hamilton, NZ, from the Universities of Auckland and Waikato. Their 13-week R.E.A.L (Remarkable, Enjoyable, Authentic, Learning) Internship Program saw them working individually on commercially relevant, meaningful projects with the support of their managers and mentors. We are pleased to say it was another highly successful year!

2021/22 Endace Interns working in the Hamilton, NZ office

Presentations Day

Because of Covid lockdowns, the interns’ introduction to Endace was virtual this year – and so too were their final presentations.

The Internship Program concluded with each of the interns presenting their individual projects to an audience.  This year the audience included Endace team members from five countries: project managers and mentors as well as all the members of our Senior Leadership Team.  We were also very happy to welcome faculty members from the University of Waikato, continuing our strong link with the original birthplace of Endace – very appropriate in our 20th year!

The interns gave an overview of their projects and the specific challenges they were trying to address. They discussed the design of their solutions, implementation challenges they had faced, and also demonstrated their solutions in action. They concluded by outlining how these projects could be applied – and potentially extended further – in the future. At the end of each presentation, audience members had an opportunity to ask questions and delve deeper into the outcomes of the project.

Elements of Success

Throughout Endace’s structured Internship Program, interns get to hone their technical skills and put their university knowledge into practice.  But beyond just acquiring technical skills, interns also have an opportunity to gain an understanding of all the different areas of Endace’s business – from sales and marketing, to finance and operations. They also get to develop their communication and organisational skills by interacting with members of the Endace team from many departments.

The interns are supported throughout the Internship Program by individual managers and mentors. They get to observe how teams work together cohesively – in an environment where ideas are respected and individuals are trusted to do their best work. It was fantastic to see these learnings reflected in the interns’ final presentations.

2021/22 Endace Interns working in the Hamilton, NZ office

Our managers and mentors also benefit hugely from the Internship Program – which provides a great opportunity to build leadership skills in their intern-support roles and gives them the satisfaction of seeing the impact of sharing their expertise.

Following the presentations, Stuart Wilson, Endace’s CEO, summed up everybody’s thoughts when he said “it constantly amazes me how much interns can achieve in a relatively short period of time!”  He emphasised Endace’s determination that intern projects should be real, commercially focused projects – and talked about how intern projects have helped shape product improvements, automation, the scaling of our testing environments, and the customer experience at Endace.

Endace’s CTO, Stephen Donnelly, commented that an important outcome of the Endace Internship Program is that it supports the wider R&D sector and helps New Zealand prepare future engineers by exposing them to cutting-edge cybersecurity technology. Cybersecurity is an increasingly important industry worldwide, and increasing students’ familiarity with key challenges, tools and technologies is vital to upskilling the NZ sector.

At Endace we are proud of our interns’ achievements and look forward to following their future accomplishments in the industry. As we conclude another successful program, we are already looking forward to the next round in Spring, which will bring fresh perspectives, learning and career development to Endace.


Making Packet Forensics Easy

Original Entry by : Cary Wright

Extracting files and other information from recorded packet data

By Cary Wright, VP Product Management, Endace


Recorded network traffic often holds vital clues required to resolve serious cyber incidents or difficult network and application issues. The challenge has been locating a packet guru with the skills to search and analyse recorded traffic and extract the evidence needed to resolve the issue at hand. Such skilful analysts are a rare breed, so we have taken that expertise and packaged it into our latest EndaceProbe software.

Recorded network traffic is now faster to search from within existing security tools such as SIEM or SOAR, and extraction of files and other important information can be done by any team member with the click of a mouse.

Getting to the Packets Faster

Our integrations with partner solutions focus on making it quicker and easier for analysts to find and analyze the packet data they need to investigate and resolve incidents.

Analysts can go from an issue or alert in their security or performance monitoring tools directly to the related packet data in InvestigationManager™ with a click of the mouse. That can save hours of time extracting, downloading and carving up massive .pcap files so they can be opened in Wireshark®.

With EndaceVision, analysts can rapidly zoom the timeline in and out to look at precursor or post-event activity and understand the full scope of any event or alert. Analysis of packet data is done on EndaceProbe appliances, at the place it was recorded, using hosted Wireshark – without having to download or transfer large .pcap files across your network.

Making packet data even more useful

In the past, packet analysis has required deep expertise and experience with tools like Wireshark or Zeek to extract essential information from recorded packet data. This has made it difficult for less experienced analysts to extract value from packet data and often meant that issues requiring packet forensics piled up on the desks of senior analysts.

With our latest software release (OSm 7.1), we’ve made it easy for even junior analysts to extract useful information from recorded packet data without deep knowledge of packet structures or decode tools. Simply select the traffic of interest in EndaceVision and, with a single click, extract malicious files or generate detailed log data from all the selected packets. This makes investigating historical events fast and far more efficient, and it no longer requires deep expertise – so packet forensics tasks don’t have to pile up on the desks of senior analysts.

Some examples of tasks that are made easier with the latest Endace software release include:

  • Reconstructing malware file downloads or transfers so you can submit them to a sandbox or virus tool.
  • Understanding exactly what data left your network by reconstructing file exfiltration events.
  • Easily generating logs from recorded traffic to look for things like unusual DNS activity, port scans, DDoS events, or other threatening activity (a rough sketch of this follows below).
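To give a rough, concrete idea of what that last task involves, here is a minimal Python sketch that uses the open-source Scapy library to pull a simple DNS query log out of a capture file. It is a conceptual illustration only (the capture filename is hypothetical), not the EndaceProbe workflow itself, which performs this kind of extraction at scale without any scripting:

# Conceptual sketch: build a simple DNS query log from a capture file
# using the open-source Scapy library. "recorded_traffic.pcap" is a
# hypothetical filename standing in for recorded network traffic.
from scapy.all import rdpcap, DNS, DNSQR, IP

packets = rdpcap("recorded_traffic.pcap")

for pkt in packets:
    # qr == 0 marks a DNS query (as opposed to a response)
    if pkt.haslayer(IP) and pkt.haslayer(DNSQR) and pkt[DNS].qr == 0:
        print(f"{pkt[IP].src} -> {pkt[IP].dst} query: {pkt[DNSQR].qname.decode()}")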

See how easy this is in the short 10-minute demonstration below (file extraction is at 08:15):

For more information on these great new features, or to arrange a demonstration to show how Endace could help you, contact us.


Multi-Tenancy introduced with OSm 7.1

Original Entry by : Cary Wright

Securely sharing packet capture infrastructure across multiple entities

By Cary Wright, VP Product Management, Endace


We are proud to announce that EndaceProbe now supports Multi-Tenancy – “Woo-hoo!” I hear you say. If you are an MSSP, MDR, Service Provider, or organisation with multiple departments, your SOC teams can now reap the benefits of having access to weeks or months of continuously recorded network traffic whilst sharing costs with many other like-minded SOC teams. Let’s dig into what Multi-Tenancy is and why it’s important.

At the most basic level, Multi-Tenancy is the ability to host multiple “entities” (e.g. multiple customers or multiple organizational divisions) on a single architecture at the same time. To put it another way, Multi-Tenancy offers a way to share the costs of a system or service across more than one entity. Multi-tenancy can mean different things depending on your domain of expertise:

  • Cloud providers are inherently multi-tenanted, serving millions of clients with shared compute
  • Operating systems often host multiple tenants on a single machine
  • Networks can supply connectivity to multiple teams or organizations via a single infrastructure.

All of these scenarios share two essential requirements:

  1. Each tenant’s data must remain private and accessible to only that authorized tenant, and
  2. Each tenant needs access to reliable, predictable, or contracted resources – such as bandwidth, compute, storage, security services, expertise, etc.

Multi-tenancy can help organizations scale critical security services in a cost-efficient manner. A capable security architecture or service requires significant investment and the expertise to operate it. By enabling that investment to be shared, multi-tenancy makes such services available to organizations that might otherwise not have been able to afford them.

A good example of where Multi-Tenancy can be extremely useful is the Security Operations Center (SOC). Typically, only large, well-funded organisations have the resources to build their own dedicated SOC. Multi-tenancy can enable multiple organizations to share a SOC, each benefiting from a strengthened security posture without carrying the full burden of the costs and effort involved.

This is the model underpinning outsourced MSSP services, for example. But it can also be an ideal model for larger organizations with multiple divisions that need to maintain separation from each other, or where multiple individual companies are owned by a common parent. It can also be a useful way to isolate a newly acquired company until its systems can be safely migrated or transferred over to the new owner’s infrastructure.

We see lots of areas where organizations are benefiting from this ability to share infrastructure and services. So we are very pleased to announce that, with the new OSm 7.1 software release, the EndaceProbe Analytics Platform now also supports Multi-Tenancy for network recording.

This is especially useful where multiple tenants share the same network. A single EndaceProbe, or a fabric of EndaceProbes, can now be securely shared across multiple different organisations or tenants, while keeping the data for each tenant secure and private. EndaceProbes continuously record all network data on the shared network, but only provide each tenant with access to their own data.

In this case the tenancies are defined by VLANs, where each tenant has a VLAN, or set of VLANs, that carries only their traffic. When a user needs to investigate a security threat in their tenancy, they simply log into InvestigationManager to search, inspect, and analyse only the traffic that belongs to that tenancy. It’s as if each tenant has its own, wholly separate, EndaceFabric, dedicated just to its own tenancy.
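As a purely conceptual illustration of that VLAN-based separation (not the EndaceProbe implementation), the Python sketch below uses the open-source Scapy library to show how a shared recording can be sliced into per-tenant views based on 802.1Q VLAN tags. The tenant names, VLAN IDs and filenames are all hypothetical:

# Conceptual sketch of VLAN-based tenant separation using Scapy.
# Tenant names, VLAN IDs and filenames are hypothetical examples.
from scapy.all import rdpcap, wrpcap, Dot1Q

TENANT_VLANS = {
    "tenant-a": {10, 11},  # tenant-a's traffic is carried on VLANs 10 and 11
    "tenant-b": {20},      # tenant-b's traffic is carried on VLAN 20
}

def tenant_view(capture_file, tenant):
    """Return only the packets whose 802.1Q VLAN tag belongs to this tenant."""
    vlans = TENANT_VLANS[tenant]
    return [p for p in rdpcap(capture_file) if p.haslayer(Dot1Q) and p[Dot1Q].vlan in vlans]

# Each tenant sees only its own slice of the shared recording
wrpcap("tenant-a-view.pcap", tenant_view("shared_capture.pcap", "tenant-a"))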

This new capability is important for large organisations that service multiple departments, agencies, or divisions. Service providers, MSSPs, and MDRs that service multiple clients will also benefit from Multi-Tenancy, which gives each of their clients ready access to their own recorded network traffic for fast, secure, and private security incident response.

We are very excited that this new Multi-Tenancy feature can help make Network Recording accessible for many more organizations, helping them to resolve incidents faster and with greater confidence.

For more information on this great new feature, or to arrange a demonstration to show how Endace could help you, contact us.


Endace Packet Forensics Files: Episode #33

Original Entry by : Michael Morris

Michael talks to NIST Fellow, Ron Ross

By Michael Morris, Director of Global Business Development, Endace



The dynamic nature and complexity of many organizations’ cyber infrastructure makes it hard enough to keep it running and performing well, let alone to maintain the highest levels of security to protect your IP and data. But do you know what the highest security standards are?

In this episode of the Endace Packet Forensics Files I talk with NIST Fellow Ron Ross, who shares how cybersecurity standards are evolving to keep pace with new threats and challenges. Ron highlights where he sees most organizations falling short and the highest priorities they should be addressing. He also shares insights into new standards and recommendations for protecting operational technology, which is becoming an attractive target for threat actors.

Finally, Ron talks about the need to move from a mindset of “prevention” to building “resiliency” into your security architecture to stay ahead of cyberthreats.

Other episodes in the Secure Networks video/audio podcast series are available here.


Triggered vs Continuous Capture

Original Entry by : Cary Wright

Can security teams afford to collect only a fraction of the evidence necessary to run down cyber attacks?

By Cary Wright, VP Product Management, Endace


Security teams rely on a variety of data sources when investigating security threats, including network security logs, endpoint detection logs, and logs from various servers and AAA devices on the network. But often these logs are not sufficient to determine the seriousness and extent of a threat – they may be missing information, or may have been manipulated by an attacker. So many analysts turn to continuous Full Packet Capture (FPCAP) to fully analyse threat activity.

Continuous Full Packet Capture records every packet (regardless of rules) to disk in a large rotating buffer, ensuring that any and all threat traffic is recorded for later analysis. FPCAP reveals all the stages of an attack: the initial exploit, command-and-control communications, surveillance, and execution. FPCAP is also very trustworthy because it cannot easily be manipulated by an attacker. And because all packets are recorded, we can be sure we have a record of all threat activity, whether or not that threat is detectable by security monitoring tools. Historically, though, it has been expensive to deploy enough FPCAP storage to provide the recommended 30 days of look-back that security analysts need. This is where triggered capture enters the story.
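To make the rotating-buffer idea concrete, here is a minimal Python sketch using the open-source Scapy library. It is only an illustration of the concept – dedicated FPCAP appliances record to disk at far larger scale and speed – and the buffer size is a hypothetical stand-in for the desired look-back window:

# Sketch of the "rotating buffer" behind continuous capture: every packet
# is kept, and the oldest packets are overwritten once the buffer is full.
# Requires capture privileges; the buffer size is a hypothetical example.
from collections import deque
from scapy.all import sniff, wrpcap

LOOK_BACK_PACKETS = 1_000_000             # sized to give the desired look-back window
ring = deque(maxlen=LOOK_BACK_PACKETS)    # oldest entries drop off automatically

def record(pkt):
    ring.append(pkt)                      # no rules, no filters: record everything

# Capture until interrupted, then dump the buffer for investigation
sniff(prn=record, store=False)
wrpcap("look_back_window.pcap", list(ring))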

What is Triggered Packet Capture?

The concept of triggered packet capture is to capture only those packets related to detected or likely threats and discard the rest. The goal is to reduce the storage capacity needed to provide the necessary look-back period and thereby minimize the cost of storage.

One approach is to create rules or triggers to capture specific sequences or flows of packets and ignore anything that doesn’t match these rules. Recorded packets are stored in a unique packet capture (PCAP) file on a local file system or RAID array, and at the same time an event notification may be logged to a SIEM or other security monitoring tool to ensure the PCAP can be found later.

Alternatively, the trigger rule might be a firewall signature or rule that causes a handful of packets to be recorded for that event to show what triggered the rule to fire. This is often an option provided by NGFWs but typically only very limited storage capacity is provided.
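By way of contrast with continuous capture, here is a minimal Python sketch of the rule-based approach just described, again using the open-source Scapy library. The trigger rule and address are hypothetical; the key point is that anything not matching the rule is never written to disk:

# Sketch of rule-based triggered capture: only packets matching a predefined
# BPF filter are recorded; everything else is silently discarded.
# The filter expression and address are hypothetical examples.
from scapy.all import sniff, wrpcap

TRIGGER_RULE = "host 203.0.113.50 and tcp port 443"   # the "trigger" chosen ahead of time

matched = sniff(filter=TRIGGER_RULE, count=1000)       # requires capture privileges
wrpcap("triggered_event.pcap", matched)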

A second approach is to only capture those packets that your security monitoring tools don’t recognise as “normal” on the basis that this anomalous traffic could potentially indicate a threat or attack.

The Problems with Triggered Packet Capture

There are serious flaws with both the approaches above that will leave security teams lacking the PCAP evidence they need to properly investigate events.

Firstly, it’s really not possible to predict ahead of time what attacks you might be subject to. What about attacks that leverage the next Heartbleed, SolarWinds, or Log4j 2 type of vulnerability? How can you write triggers or rules that will ensure you reliably capture all the relevant packets relating to future, unknown threat vectors and vulnerabilities? More than likely you will miss these attacks.

Secondly, capturing the packets relating to an initial trigger event such as an intrusion is useful, but it only tells part of the story. An initial exploit is just the beginning of an attack. You also need to capture downstream activity such as command-and-control communications, lateral movement, surveillance, exfiltration and other malicious activity, which will often be unique to each victim once the initial exploit has been executed. And to investigate these conclusively you need access to full packet data: you can’t reconstruct exfiltrated data or malware drops from metadata.

Lastly, attackers often try to camouflage their activity by using common protocols and traffic patterns to make their malicious activity look like normal user behavior. Capturing only “unknown” or “anomalous” traffic will most likely miss this activity.

These problems make triggered capture very costly: because your team has only part of the story, they must guess at what might have happened and assume the worst case when reporting to customers, markets, or affected parties. That lack of concrete evidence carries a high price – all to save a few dollars on storage.

Only Full Packet Data provides a Complete Picture

During the incident response process, once an exploit has been detected analysts then need to look at any subsequent activity from the affected host or hosts – such as payload downloads, command-and-control (C2) communications, lateral movement to other hosts on the network, etc.

Only by examining full packet data can we really start to understand the full impact of a detected threat. If we only have triggered capture available, and no packets were captured relating to activity after the initial exploit was detected, how can we tell whether a backdoor was installed? Or what data was exfiltrated, so we understand our legal obligations to notify affected parties? Questions like these cannot remain unanswered.

Advances in storage media, capture techniques and compression technology have made continuous full PCAP affordable now. So there’s no longer a need to compromise by deploying unreliable Triggered Capture to save a few bucks on disk storage. It’s simpler, faster, and much more robust to capture every packet so we are prepared to investigate any threat, new or old. Continuous full PCAP ensures teams can respond to incidents quickly and confidently, and in the long run this significantly reduces the cost of cybersecurity.


Endace Packet Forensics Files: Episode #32

Original Entry by : Michael Morris

Michael talks to Merritt Baer, Principal in the Office of the CISO at AWS

By Michael Morris, Director of Global Business Development, Endace



Is your organization trying to implement enterprise-level security at scale but you’re not sure where to focus?

In this episode of the Endace Packet Forensics Files I talk with Merritt Baer, Principal in the Office of the CISO at AWS, who shares her experience of how to design and build robust, dynamic security at scale. Merritt discusses what security at scale looks like, some of the things that are often missed, and how to protect rapidly evolving hybrid cloud infrastructures. She highlights some common pitfalls that organizations run into as they shift workloads to cloud providers, and how to pivot your SOC teams and tools to ensure you have robust security forensics in place.

Finally, Merritt examines how adopting SOAR platforms can help, and things you can do to prevent gaps and breakdowns in your security posture.

Other episodes in the Secure Networks video/audio podcast series are available here.