Endace Packet Forensics Files: Episode #50

Original Entry by : Michael Morris

In our 50th Episode, Michael talks to Martyn Crew, Senior Director, Solutions Marketing and Partner Technologies at Gigamon

By Michael Morris, Director of Global Business Development, Endace


Michael Morris, Director of Global Business Development, Endace

It’s my pleasure to welcome Martyn Crew from Gigamon for this 50th Episode of the Packet Forensics Files. It’s a great milestone to have reached, and the series continues to grow in popularity – thanks to people like Martyn who have joined me to share their valuable expertise and advice.

In this episode Martyn, a 30-year veteran in the cyber security and network management space, shares his expertise on the limitations and risks associated with exclusively using log and metadata as the primary resources for your security team’s investigations. He discusses various use cases where network traffic and full packet data can play a crucial role in security investigations, highlighting the potential oversights that could occur when you rely solely on log data.

We talk about how to address the scalability challenges of leveraging full-packet data and delve into the storage and retention obstacles that many organizations fear when looking at solution options.

Finally, Martyn suggests how to balance the telemetry sources and costs for your SOC team, and shares some key considerations for maintaining visibility in your hybrid cloud infrastructure encompassing both on-prem and public or private cloud environments.


Other episodes in the Secure Networks video/audio podcast series are available here. Or listen to the podcast here or on your favorite podcast platform.


Endace Packet Forensics Files: Episode #49

Original Entry by : Michael Morris

Michael talks to ICS and SCADA security expert, Lionel Jacobs

By Michael Morris, Director of Global Business Development, Endace


Michael Morris, Director of Global Business Development, Endace

In this episode, Michael talks to Lionel Jacobs, Senior Partner Engineer, ICS and SCADA security expert, at Palo Alto Networks. Lionel draws on his more than 25 years of experience in OT (Operational Technology) and almost a decade at Palo Alto Networks in discussing some of the challenges of securing OT, IoT and critical infrastructure from cyberattack.

Lionel talks about some of the unique challenges that OT systems present for security teams and why being prepared to defend against attacks on critical infrastructure is so crucial.

Nation-state actors obviously see critical infrastructure as a prime target for attacks. But so too do criminal actors who see critical infrastructure operators as potentially more vulnerable to extortion than other targets.

Lionel discusses the role of Zero Trust and limited access zoning in reducing the risk of attackers expanding their ability to move from OT environments into the enterprise network. Carefully mapping the network and assets, and understanding the requirements for access between different areas of the infrastructure, is key to this. Often legacy OT devices and control systems can’t be easily patched, so placing these elements into a security zone with a remediating control between that zone and other parts of the network is the only feasible way to protect them from attack.

Lionel talks about the challenge of detecting attacks in OT environments, how to spot unusual activity, and the importance of having a reference baseline to compare against. He highlights the importance of packet data in providing insight into what is happening on OT networks.

Lionel also stresses the importance of close collaboration between OT security teams and the operators of OT networks. It’s crucial to ensure that the safe and effective operation of critical infrastructure isn’t adversely impacted by security teams that don’t understand the operational processes and procedures that are designed to ensure the safety of the plant and the people that work there.

Lastly, Lionel reiterates the importance of gathering reliable evidence, and enabling security analysts to quickly get to the evidence that’s pertinent to their investigation. It’s not just about collecting data, but about making sure that data is relevant and easy to access.

Other episodes in the Secure Networks video/audio podcast series are available here. Or listen to the podcast here or on your favorite podcast platform.


Combining Endace and Elastic delivers detailed visibility into real-time and historical network activity

Original Entry by : Cary Wright

By Cary Wright, VP Product Management, Endace


Cary Wright, VP Product Management, Endace

We’re pleased to announce our newest technical partnership with leading SIEM and observability platform provider, Elastic. By combining EndaceProbe™ always-on Hybrid Cloud packet capture with Elastic™ Stack and Elastic™ Security, we’re providing the packet-level network visibility and detailed network metadata that Security and IT teams need when responding to security threats and network or application performance issues.

How Do We Work Together?

By combining Endace and Elastic Stack, organizations gain accurate, highly detailed visibility into both real-time and historical network activity. Security and IT analysts can search network metadata in Elastic, and quickly pivot to full packet data for forensic investigations when they need to. The result is faster, more accurate incident investigation and resolution.

The combination of Elastic Stack and EndaceProbe gives cybersecurity and IT teams the ability to see exactly what’s happening on their network in real time. EndaceProbes can record weeks or months of full packet capture across hybrid cloud networks to provide a complete and accurate record of all network activity. The detailed full packet capture data recorded by EndaceProbes is a perfect complement to the rich logs and metadata collected by Elastic Stack. When analysts need to go back in time to investigate an incident, they have a complete record of that activity at their fingertips.

Beyond this, the ability to pivot from anomalies or security alerts directly to forensic examination of packet-level data lets analysts see exactly what’s happening, respond quickly to incidents, and dramatically mitigate threat risk to their organizations.
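To make the metadata-search side of this workflow concrete, here is a minimal sketch using the official Python Elasticsearch client to pull flow records for a suspect host over a recent time window. The index pattern and field names are assumptions for illustration only, not the actual schema shipped with the integration.

```python
# Minimal sketch: search network metadata in Elastic for a suspect host,
# capturing the time window and 5-tuple needed to pivot into full packet
# data on the EndaceProbe. Index pattern and field names are assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://elastic.example.internal:9200", api_key="...")

resp = es.search(
    index="endace-flow-*",  # assumed index pattern for flow metadata
    query={
        "bool": {
            "filter": [
                {"term": {"source.ip": "10.1.2.3"}},
                {"range": {"@timestamp": {"gte": "now-24h", "lte": "now"}}},
            ]
        }
    },
    sort=[{"@timestamp": "asc"}],
    size=100,
)

for hit in resp["hits"]["hits"]:
    doc = hit["_source"]
    # Each record carries the conversation details needed to scope a
    # packet-level search for the same traffic on the EndaceProbe.
    print(doc["@timestamp"], doc.get("source", {}), doc.get("destination", {}))
```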

EndaceFlow and Elastic Stack

In addition, EndaceProbe appliances can host EndaceFlow™, which generates extremely high-fidelity NetFlow data at full line rate. This NetFlow data can be ingested by Elastic Stack to provide detailed metadata for monitoring the security and performance of the network and interrogating network activity. Pre-built integration between EndaceProbes and Elastic Stack enables streamlined investigation workflows. Analysts can click on alerts in the Elastic UI to go directly to the related full packet data recorded by EndaceProbe. Analysts can quickly view traffic right down to individual packet level to see precisely what occurred before, during and after any event, with absolute certainty.
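The pivot itself is driven by the alert’s context: the conversation’s 5-tuple and time window scope the packet search on the EndaceProbe. The snippet below is a purely hypothetical sketch of how that hand-off could be expressed in code; the URL path and parameter names are illustrative assumptions, not the documented integration.

```python
# Hypothetical sketch of turning an alert's context into a link to a
# packet-level investigation. The URL path and parameter names here are
# illustrative assumptions only, not the documented EndaceProbe interface.
from urllib.parse import urlencode

def pivot_url(alert: dict, probe: str = "https://endaceprobe.example.internal") -> str:
    """Build a packet-search link from an alert's 5-tuple and time window."""
    params = {
        "start": alert["event"]["start"],  # ISO-8601 timestamps from the alert
        "end": alert["event"]["end"],
        "sip": alert["source"]["ip"],
        "dip": alert["destination"]["ip"],
        "dport": alert["destination"]["port"],
    }
    return f"{probe}/investigate?{urlencode(params)}"

alert = {
    "event": {"start": "2023-06-01T10:15:00Z", "end": "2023-06-01T10:20:00Z"},
    "source": {"ip": "10.1.2.3"},
    "destination": {"ip": "203.0.113.7", "port": 443},
}
print(pivot_url(alert))
```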

For more information about our Fusion Partner integrations, please visit www.endace.com/fusion-partners.

To see a demonstration of this Elastic Security integration in action please visit the Elastic partner page at https://www.endace.com/elastic-security.


Endace Packet Forensics Files: Episode #48

Original Entry by : Michael Morris

Michael talks to Endace’s IT Security Manager, Al Edgar.

By Michael Morris, Director of Global Business Development, Endace


Michael Morris, Director of Global Business Development, Endace

In this episode of Packet Forensics Files, I ask Al Edgar, former Information Security Manager for Health Alliance – and now IT Security Manager at Endace – about some of the important areas a security leader needs to focus on and what new challenges they are facing.

Firstly, Al says, it’s important to take a holistic approach to cybersecurity, by looking at the three critical components of robust security: people, processes, and technology. He stresses the importance of Incident Response planning and why it’s so critical to define clear objectives, roles, and responsibilities as part of the plan.

In order to stay ahead of emerging threats, Al says keeping up-to-date with cybersecurity trends is crucial. He recommends subscribing to cyber blogs, leveraging threat intelligence feeds, and mapping threat intelligence against your organizational infrastructure. He also highlights the importance of having a plan for managing third-party vendor risk.

Al provides some valuable recommendations on where to start to ensure a more robust security posture, including maintaining a centralized inventory, conducting thorough risk assessments, cataloging and categorizing risks, and incorporating appropriate security clauses into contracts with suppliers and partners.

Cybersecurity awareness training is another critical area, Al says. His view is that it’s the responsibility of every individual in an organization to prioritize cybersecurity, but he highlights the importance of support and training to enable them to do this effectively.

Lastly, Al talks about future cybersecurity threats, and calls out the potential risks associated with the weaponization of AI technology. He highlights the need for caution when sharing information with AI systems, reminding us to be mindful of potential privacy breaches and the risk that sensitive IP or data disclosed to AI tools may be misused or insufficiently protected.

Other episodes in the Secure Networks video/audio podcast series are available here. Or listen to the podcast here or on your favorite podcast platform.


Endace Packet Forensics Files: Episode #47

Original Entry by : Michael Morris

Michael talks to network forensics and incident response specialist, Jasper Bongertz.

By Michael Morris, Director of Global Business Development, Endace


Michael Morris, Director of Global Business Development, Endace

What are some of the challenges of responding to a serious incident – such as a ransomware attack or advanced persistent attack? Where do you start, and what are the critical things you need to do?

In this episode we are lucky to welcome Jasper Bongertz, Head of Digital Forensics and Incident Response at G DATA Advanced Analytics in Germany. Jasper has a wealth of experience from working on the front line of incident response at G DATA, as well as in his previous role at Airbus. He also has a long background in network forensics – having been a Wireshark and network forensics instructor – and continues to be a very active member of the Wireshark community.

Jasper starts by outlining some of the steps to mitigate “headless chicken mode”, which is what he often sees when an organization first encounters a serious incident.

The process starts with understanding exactly what has happened, and what the impact is so that a clear response plan and timeline for resolution can be established. This requires gathering the available evidence – including network packet data if it’s available. It’s important to be able to do this quickly – particularly in the case of ransomware attacks where the organization’s IT systems may be unavailable as a result of the attack. With ransomware, speed is crucial since the organization’s primary priority is typically to get back to an emergency operating state as quickly as possible. Jasper lists some of the tools that his team finds useful in rapidly gathering that critical evidence.

Once the scope of the incident has been established, you need to have the specific expertise on hand to investigate and understand what happened and how it happened so you can identify the right response. Typically, Jasper says, that will involve having at least an incident response specialist, a forensic expert, and a malware reverse engineer, but depending on the scale of the event may involve many others too.

Jasper outlines the most important steps organizations can take to protect themselves against ransomware attacks and to ensure that, in the event of a successful attack, they can recover. The two most important of these are making sure domain administrator credentials are protected to prevent privilege escalation, and ensuring backups are complete and protected from sabotage.

Lastly, Jasper discusses the changing cyberthreat landscape. He outlines why he thinks data exfiltration and extortion will become more common than ransomware and encryption, and why network data is critical to combat this growing threat.

Other episodes in the Secure Networks video/audio podcast series are available here. Or listen to the podcast here or on your favorite podcast platform.


Introducing EndaceProbe Cloud

Original Entry by : Cary Wright

Scalable Packet Capture for Hybrid Cloud

By Cary Wright, VP Product Management, Endace


Cary Wright, VP Product Management, Endace

The rapid growth of cloud vulnerabilities, hijacked cloud credentials, APTs targeting cloud, and lack of network layer visibility in cloud has made one thing clear: recorded network packet data is just as essential in the cloud as it is in physical networks. 

Enterprises know the value of our packet capture solutions, and they have told us they need the power of packets in the cloud as well. In many cases, they have moved – or plan to move – workloads to the cloud but have been hampered by an inability to gain the same visibility into activity in their public cloud infrastructure as they are used to relying on in on-premise environments.

Leveraging our 20-plus years of experience in delivering accurate, reliable packet capture for some of the world’s largest organizations, Endace developed EndaceProbe Cloud as the first truly scalable, enterprise-class solution for providing always-on packet capture in public cloud environments.

Unlike many solutions on the market, we’ve done it in a way that scales easily and delivers truly unified visibility that lets security, network and IT teams analyze packet data from across hybrid cloud and multi-cloud environments quickly and easily from a central console. 

EndaceProbe Cloud delivers packet-level visibility for public cloud that is critical for threat hunting, incident response and performance management in those environments. It operates seamlessly with EndaceProbe hardware appliances to deliver always-on packet capture across on-premise, private and public cloud infrastructure, to provide unified visibility across the entire network.

See it in Action

The demo below shows how easy it is to quickly search for packet data across a multi-cloud – AWS and Azure – environment, recreate files from packet data, and drill in to analyze the full packets, all from a single console.

EndaceProbe Cloud is a full-featured EndaceProbe, purpose-built for deployment in AWS and Microsoft Azure. It provides the following benefits to customers in cloud and hybrid cloud environments:

    • Continuous, zero-loss packet capture in public and hybrid cloud environments that provides weeks or months of visibility
    • A unified console for fast global search and analysis across on-premise, private and public cloud environments
    • Full visibility into North-South and East-West traffic
    • Secure packet storage within the customer’s own virtual network or virtual private cloud (VPC)
    • Powerful traffic analysis and investigation tools including file extraction, log generation, and hosted Wireshark™
    • Seamless workflow integration with an open API and a strong ecosystem of third-party network and security tools (https://www.endace.com/fusion-partners) (see the API sketch after this list)
    • Subscription-based pricing that offers flexibility and scalability
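As an illustration of how that open-API integration point could slot into an automation or SOC workflow, the sketch below queries a cloud-deployed EndaceProbe for packets matching a suspect flow and downloads the result for offline analysis. This is a hypothetical sketch only: the endpoint paths, parameters, response fields, and authentication shown are illustrative assumptions, not the documented Endace API.

```python
# Hypothetical sketch of driving packet search and retrieval over a REST API
# from an automation script. Endpoint paths, parameters, response fields and
# auth shown here are placeholders, not the documented Endace API.
import requests

PROBE = "https://endaceprobe-cloud.example.internal"
TOKEN = "REPLACE_ME"  # assumed bearer-token auth, for illustration only

session = requests.Session()
session.headers.update({"Authorization": f"Bearer {TOKEN}"})

# Ask the probe for packets matching a 5-tuple over a time window.
search = session.post(
    f"{PROBE}/api/v1/packet-search",  # hypothetical endpoint
    json={
        "start": "2023-06-01T10:15:00Z",
        "end": "2023-06-01T10:20:00Z",
        "filter": "host 10.1.2.3 and port 443",  # assumed BPF-style filter
    },
    timeout=30,
)
search.raise_for_status()
job_id = search.json()["job_id"]  # hypothetical response field

# Download the matching traffic as a pcap for offline analysis (e.g. Wireshark).
pcap = session.get(f"{PROBE}/api/v1/packet-search/{job_id}/pcap", timeout=300)
pcap.raise_for_status()
with open("suspect-flow.pcap", "wb") as f:
    f.write(pcap.content)
```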

EndaceProbe Cloud complements Endace’s hardware appliances to provide unified and seamless visibility across the entire network.


Endace Packet Forensics Files: Episode #46

Original Entry by : Michael Morris

Michael talks to Gerald Combs, Wireshark Founder, and Stephen Donnelly, Endace CTO

By Michael Morris, Director of Global Business Development, Endace


Michael Morris, Director of Global Business Development, Endace

How did Wireshark come to be, and what’s made it so successful – not just as the pre-eminent tool for analyzing network packet data, but as an open-source project in general?

In this episode I talk to Wireshark founder, Gerald Combs, and Endace CTO, Stephen Donnelly, about the origins of Wireshark, and why packet capture data is so crucial for investigating and resolving network security threats and network or application performance issues.

Gerald talks about the early days of Ethereal, a “packet sniffer” he originally created for his own use in his role at an ISP, but subsequently open-sourced as Wireshark. That fortuitous decision was key, Gerald says, to the subsequent ongoing growth and success of the Wireshark project – which will turn 25 years old in July! It enabled developers from around the world to contribute to the project, creating a Windows version in the process, and helping Wireshark to become the gold standard tool for network analysis, used by SecOps, NetOps and IT teams the world over.

Stephen has been using Wireshark right from the earliest days – when it was still called Ethereal – and is one of the many contributors to the project.

Stephen and Gerald both talk about why packet analysis is so important for cybersecurity and network performance analysis (the ubiquitous “Packets Don’t Lie” T-shirt – available from the Wireshark Foundation store – says it all really), and discuss examples of the many and varied problems that Wireshark is helping people to solve.

Stephen outlines the differences between network flow data and packet capture data and why packet data is essential for solving some problems where flow data just doesn’t contain the level of detail required.
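To make that distinction concrete, here is a small sketch (using the scapy library purely as an illustrative assumption, not something discussed in the episode) that reduces a capture file to flow-style 5-tuple summaries and then shows the per-packet payload detail that a flow record discards.

```python
# Illustration of flow data vs packet data: a flow record keeps the 5-tuple
# plus packet/byte counts, while the capture retains every payload byte.
# scapy and the file name are assumptions used purely for illustration.
from collections import defaultdict
from scapy.all import rdpcap, IP, TCP

packets = rdpcap("example.pcap")

# Flow-level view: who talked to whom, and how much traffic was exchanged.
flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
for pkt in packets:
    if IP in pkt and TCP in pkt:
        key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
        flows[key]["packets"] += 1
        flows[key]["bytes"] += len(pkt)

for key, stats in flows.items():
    print(key, stats)

# Packet-level view: only the packets themselves show what was actually sent.
for pkt in packets:
    if TCP in pkt and bytes(pkt[TCP].payload):
        print(pkt.time, bytes(pkt[TCP].payload)[:64])
```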

Wireshark is continually evolving, with support for new protocols and new UI enhancements that make it easier for analysts to slice-and-dice packet data. Gerald says that Wireshark is almost the perfect open-source project because it allows for a lot of parallel collaboration from contributors in creating new dissectors, ensuring that Wireshark keeps up with the rapid pace of change in networking. Now that planning for Wireshark 5.x has started, Gerald also looks ahead to some of the possible new features that might appear in future releases.

And finally, Gerald talks about the new Wireshark Foundation (which Endace is a sponsor of), which has been set up to support ongoing development of the Wireshark project and ensure it continues its resounding success into the future.

Wireshark is coming up on its 25th birthday and still going from strength to strength. Don’t miss this fascinating interview with the leader of one of the most successful open-source projects around. Gerald and Stephen’s insightful commentary, as well as some fantastic tips and tricks, make this a must-watch episode.

Other episodes in the Secure Networks video/audio podcast series are available here. Or listen to the podcast here or on your favorite podcast platform.


Endace Packet Forensics Files: Episode #45

Original Entry by : Michael Morris

Michael talks to Dimitri McKay, Principal Security Strategist and CISO Advisor at Splunk

By Michael Morris, Director of Global Business Development, Endace


Michael Morris, Director of Global Business Development, Endace

Increasingly complex systems, an expanding threat landscape, and an explosion in the number of potential entry points all make managing security at scale a daunting prospect. So what can you do to implement effective security at scale, and what are some of the pitfalls to avoid?

In this episode I talk with Dimitri McKay, Principal Security Strategist and CISO Advisor at Splunk, about where to start addressing the challenges of security at scale. He highlights the importance of robust risk assessment, developing clear security goals and ensuring leadership buy-in to the organization’s security strategy. And the importance of balancing the needs of users with the need to secure the enterprise.

Dimitri discusses some of the pitfalls that organizations often fall into, and what security leaders can do – and where they should start – to avoid making the same mistakes. He talks about the importance of thinking strategically not just tactically, of being proactive rather than just reactive, and of creating a roadmap for where the organization’s security needs to be in a year, two years, three years into the future.

Dimitri also highlights the need to collect the right data to ensure the organization can accomplish the security goals it has set, to enable high-fidelity threat detection and provide the necessary context for effective, and efficient, threat response. Security teams started by collecting what they had, he says – firewall logs, authentication logs, etc. – but this isn’t necessarily sufficient to enable them to accomplish their objectives because it focuses more on IT risks than on the critical business risks.

Finally, Dimitri puts on his futurist hat to predict what security teams should be on the lookout for. Not surprisingly, he predicts that the rapid development of AI tools like OpenAI’s ChatGPT has huge potential benefits for cyber defenders. But these tools will also enable cyber attackers to create increasingly sophisticated threats and circumvent defenses. AI is both an opportunity and a threat.

Other episodes in the Secure Networks video/audio podcast series are available here. Or listen to the podcast here or on your favorite podcast platform.