Endace Packet Forensics Files: Episode #47

Original Entry by: Michael Morris

Michael talks to network forensics and incident response specialist, Jasper Bongertz.

By Michael Morris, Director of Global Business Development, Endace



What are some of the challenges of responding to a serious incident – such as a ransomware attack or advanced persistent attack? Where do you start, and what are the critical things you need to do?

In this episode we are lucky to welcome Jasper Bongertz, Head of Digital Forensics and Incident Response at G DATA Advanced Analytics in Germany. Jasper has a wealth of experience from working on the front line of incident response at G DATA as well as in his previous role at Airbus. He also has a long background in network forensics, having been a Wireshark and network forensics instructor, and he continues to be a very active member of the Wireshark community.

Jasper starts by outlining some of the steps to mitigate “headless chicken mode” – the state he often sees when an organization first encounters a serious incident.

The process starts with understanding exactly what has happened and what the impact is, so that a clear response plan and timeline for resolution can be established. This requires gathering whatever evidence is available – including network packet data. It’s important to be able to do this quickly – particularly in the case of ransomware attacks, where the organization’s IT systems may be unavailable as a result of the attack. With ransomware, speed is crucial since the organization’s primary priority is typically to get back to an emergency operating state as quickly as possible. Jasper lists some of the tools his team finds useful in rapidly gathering that critical evidence.

Once the scope of the incident has been established, you need to have the specific expertise on hand to investigate and understand what happened and how it happened so you can identify the right response. Typically, Jasper says, that will involve having at least an incident response specialist, a forensic expert, and a malware reverse engineer, but depending on the scale of the event may involve many others too.

Jasper outlines the most important steps organizations can take to protect themselves against ransomware attacks and to ensure that, in the event of a successful attack, they can recover. The two most important of these are making sure domain administrator credentials are protected to prevent privilege escalation, and ensuring your backups are complete and protected from sabotage.

Lastly, Jasper discusses the changing cyberthreat landscape. He outlines why he thinks data exfiltration and extortion will become more common than ransomware and encryption, and why network data is critical to combat this growing threat.

Other episodes in the Secure Networks video/audio podcast series are available here. Or listen to the podcast here or on your favorite podcast platform.


Introducing EndaceProbe Cloud

Original Entry by: Cary Wright

Scalable Packet Capture for Hybrid Cloud

By Cary Wright, VP Product Management, Endace



The rapid growth of cloud vulnerabilities, hijacked cloud credentials, and APTs targeting the cloud, combined with the lack of network-layer visibility in the cloud, has made one thing clear: recorded network packet data is just as essential in the cloud as it is in physical networks.

Enterprises know the value of our packet capture solutions, and they have told us they need the power of packets in the cloud as well. In many cases, they have moved – or plan to move – workloads to the cloud but have been hampered by an inability to gain the same visibility into activity in their public cloud infrastructure as they are used to relying on in on-premise environments.

Leveraging our 20-plus years of experience in delivering accurate, reliable packet capture for some of the world’s largest organizations, Endace developed EndaceProbe Cloud as the first truly scalable, enterprise-class solution for providing always-on packet capture in public cloud environments.

Unlike many solutions on the market, we’ve done it in a way that scales easily and delivers truly unified visibility, letting security, network and IT teams quickly analyze packet data from across hybrid cloud and multi-cloud environments from a central console.

EndaceProbe Cloud delivers packet-level visibility for public cloud that is critical for threat hunting, incident response and performance management in those environments. It operates seamlessly with EndaceProbe hardware appliances to deliver always-on packet capture across on-premise, private and public cloud infrastructure, to provide unified visibility across the entire network.

See it in Action

The demo below shows how easy it is to quickly search for packet data across a multi-cloud – AWS and Azure – environment, recreate files from packet data, and drill in to analyze the full packets. All from a single console.

EndaceProbe Cloud is a full-featured EndaceProbe, purpose-built for deployment in AWS and Microsoft Azure environments. It provides the following benefits to customers in cloud and hybrid cloud environments:

    • Continuous, zero-loss packet capture in public and hybrid cloud environments, providing weeks or months of visibility
    • A unified console for fast global search and analysis across on-premise, private and public cloud environments
    • Full visibility into North-South and East-West traffic
    • Secure packet storage within the customer’s own virtual network or virtual private cloud (VPC)
    • Powerful traffic analysis and investigation tools, including file extraction, log generation, and hosted Wireshark™
    • Seamless workflow integration with an open API and a strong ecosystem of third-party network and security tools (https://www.endace.com/fusion-partners)
    • Subscription-based pricing that offers flexibility and scalability

EndaceProbe Cloud complements Endace’s hardware appliances to provide unified and seamless visibility across the entire network.

Endace Packet Forensics Files: Episode #46

Original Entry by: Michael Morris

Michael talks to Gerald Combs, Wireshark Founder, and Stephen Donnelly, Endace CTO

By Michael Morris, Director of Global Business Development, Endace



How did Wireshark come to be, and what’s made it so successful – not just as the pre-eminent tool for analyzing network packet data, but as an open-source project in general?

In this episode I talk to Wireshark founder, Gerald Combs, and Endace CTO, Stephen Donnelly, about the origins of Wireshark, and why packet capture data is so crucial for investigating and resolving network security threats and network or application performance issues.

Gerald talks about the early days of Ethereal, a “packet sniffer” he originally created for his own use in his role at an ISP, but subsequently open-sourced as Wireshark. That fortuitous decision was key, Gerald says, to the subsequent ongoing growth and success of the Wireshark project – which will turn 25 years old in July! It enabled developers from around the world to contribute to the project, creating a Windows version in the process, and helping Wireshark to become the gold standard tool for network analysis, used by SecOps, NetOps and IT teams the world over.

Stephen has been using Wireshark right from the earliest days – when it was still called Ethereal – and is one of the many contributors to the project.

Stephen and Gerald both talk about why packet analysis is so important for cybersecurity and network performance analysis (the ubiquitous “Packets Don’t Lie” T-shirt – available from the Wireshark Foundation store – says it all really), and discuss examples of the many and varied problems that Wireshark is helping people to solve.

Stephen outlines the differences between network flow data and packet capture data and why packet data is essential for solving some problems where flow data just doesn’t contain the level of detail required.
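As a rough illustration of that difference (not from the podcast itself), the short Python sketch below uses the open-source Scapy library to read a capture file and print the HTTP request line carried in each packet – detail that a typical flow record, which stops at the 5-tuple plus byte and packet counts, simply doesn’t contain. The file name capture.pcap is a placeholder.

    # Rough sketch, assuming the open-source Scapy library ("pip install scapy")
    # and a local capture file named "capture.pcap" (placeholder name).
    from scapy.all import rdpcap, IP, TCP, Raw

    packets = rdpcap("capture.pcap")  # load the full packet capture into memory

    for pkt in packets:
        if IP in pkt and TCP in pkt and Raw in pkt:
            payload = bytes(pkt[Raw].load)  # raw TCP payload bytes
            # A flow record ends at the 5-tuple and byte/packet counts;
            # the payload lets us recover the actual HTTP request line.
            if payload.startswith((b"GET ", b"POST ", b"HEAD ")):
                request_line = payload.split(b"\r\n", 1)[0].decode(errors="replace")
                print(f"{pkt[IP].src}:{pkt[TCP].sport} -> "
                      f"{pkt[IP].dst}:{pkt[TCP].dport}  {request_line}")

The same principle applies at far greater scale with always-on capture appliances: the packets preserve evidence that summarized flow data discards.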

Wireshark is continually evolving, with support for new protocols and new UI enhancements that make it easier for analysts to slice and dice packet data. Gerald says that Wireshark is almost the perfect open-source project because it allows for a lot of parallel collaboration from contributors creating new dissectors, helping Wireshark keep up with the rapid pace of change in networking. Now that planning for Wireshark 5.x has started, Gerald also looks ahead to some of the possible new features that might appear in future releases.

And finally, Gerald talks about the new Wireshark Foundation (which Endace is a sponsor of), which has been set up to support ongoing development of the Wireshark project and ensure it continues its resounding success into the future.

Wireshark is coming up on its 25th birthday and still going from strength to strength. Don’t miss this fascinating interview with the leader of one of the most successful open-source projects around. Gerald and Stephen’s insightful commentary, as well as some fantastic tips and tricks, make this a must-watch episode.

Other episodes in the Secure Networks video/audio podcast series are available here. Or listen to the podcast here or on your favorite podcast platform.


Endace Packet Forensics Files: Episode #45

Original Entry by: Michael Morris

Michael talks to Dimitri McKay, Principal Security Strategist and CISO Advisor at Splunk

By Michael Morris, Director of Global Business Development, Endace



Increasingly complex systems, an expanding threat landscape, and an explosion in the number of potential entry points all make managing security at scale a daunting prospect. So what can you do to implement effective security at scale, and what are some of the pitfalls to avoid?

In this episode I talk with Dimitri McKay, Principal Security Strategist and CISO Advisor at Splunk, about where to start addressing the challenges of security at scale. He highlights the importance of robust risk assessment, developing clear security goals and ensuring leadership buy-in to the organization’s security strategy. And the importance of balancing the needs of users with the need to secure the enterprise.

Dimitri discusses some of the pitfalls that organizations often fall into, and what security leaders can do – and where they should start – to avoid making the same mistakes. He talks about the importance of thinking strategically not just tactically, of being proactive rather than just reactive, and of creating a roadmap for where the organization’s security needs to be in a year, two years, three years into the future.

Dimitri also highlights the need to collect the right data to ensure the organization can accomplish the security goals it has set, to enable high-fidelity threat detection, and to provide the necessary context for effective and efficient threat response. Security teams started by collecting what they had, he says – firewall logs, authentication logs, etc. – but this isn’t necessarily sufficient to enable them to accomplish their objectives, because it focuses more on IT risks than on the critical business risks.

Finally, Dimitri puts on his futurist hat to predict what security teams should be on the lookout for. Not surprisingly, he predicts that the rapid development of AI tools like OpenAI’s ChatGPT has huge potential benefits for cyber defenders. But these tools will also enable cyber attackers to create increasingly sophisticated threats and circumvent defenses. AI is both an opportunity and a threat.

Other episodes in the Secure Networks video/audio podcast series are available here. Or listen to the podcast here or on your favorite podcast platform.


Endace Packet Forensics Files: Episode #44

Original Entry by: Michael Morris

Michael talks to David Monahan, Business Information Security Officer and former security researcher.

By Michael Morris, Director of Global Business Development, Endace



Cyberthreats are something all organizations are facing. But Pharmaceutical and Healthcare Providers have some unique challenges and vulnerabilities, and come in for more than their fair share of attention from threat actors. What can your SOC team learn from some of the best practices these organizations are implementing? Are you architecting your environment to separate IoT devices from other critical assets, and are you managing them with the same level of scrutiny?

In this episode I talk with David Monahan, a 30-year expert in cybersecurity and network management and former researcher at Enterprise Management Associates. David draws on his research background as well as his current experience working as the Business Information Security Officer at a large global pharmaceutical company.

He talks about some of the similarities and differences the Healthcare and Pharmaceutical industries have with other industries. He shares his insights into why the Healthcare and Pharmaceutical industries are so strongly targeted by threat actors and things consumers or patients can do to help protect themselves and their information.

David also discusses some of the unique challenges Healthcare organizations have around IoT devices and suggests ways to help manage these risks. He shares some best practices your security organization can leverage, and points out tools and solutions that are critical for any security stack.

Finally, David talks about what training and skills are important to ensure your SOC analysts are as prepared as possible to defend against cyberthreats.

Other episodes in the Secure Networks video/audio podcast series are available here. Or listen to the podcast here or on your favorite podcast platform.


Endace Packet Forensics Files: Episode #43

Original Entry by: Michael Morris

Michael talks to Jim Mandelbaum, Field CTO at Gigamon

By Michael Morris, Director of Global Business Development, Endace



As workloads move to the cloud, and infrastructure becomes increasingly complex, how can you ensure that your security posture evolves accordingly? It’s essential to ensure visibility across the entire network if you are to secure it effectively.

In this episode of the Endace Packet Forensics Files, I talk with Jim Mandelbaum, Field CTO at Gigamon, about what “security at scale” means. Jim draws on more than a decade of experience as a CTO in the security industry, and shares best-practice tips to ensure that as your infrastructure evolves, your security posture keeps pace.

Jim highlights the importance of leveraging automation to help deal with increasingly complex network environments. Key to this is having visibility into exactly what’s happening on your network – including on-prem, cloud and hybrid-cloud environments – so you can make informed decisions about what traffic needs to be monitored and recorded, and what tasks can be automated to ensure threat visibility.

It’s also critical to break down team silos, Jim says. Otherwise, responsibility has a tendency to fall through the cracks. Teams need to collaborate closely, and the security team should be included in IT strategy planning – particularly cloud migration projects. That makes it easier to determine who is responsible for which parts of security from the get-go. When teams have the opportunity to discuss the challenges they face, they can often leverage solutions that have been successfully implemented elsewhere in the organization – saving time, resources and budget as a result.

Lastly, Jim highlights the importance of talking with your vendors about their future product strategies to ensure they align with your organization’s plans. Otherwise, there’s a risk of divergence which could prove very costly down the track.

Other episodes in the Secure Networks video/audio podcast series are available here. Or listen to the podcast here or on your favorite podcast platform.


Endace Packet Forensics Files: Episode #38

Original Entry by: Michael Morris

Michael talks to Hakan Holmgren, EVP of Sales, Cubro

By Michael Morris, Director of Global Business Development, Endace



As data growth accelerates and distributed workloads increase, enterprises are prioritizing cost efficiency and space minimization in modern datacenters. They are looking to leverage new technologies and use smaller, more cost-efficient appliances to reduce cost and improve efficiency.

By architecting infrastructure to prioritize stability and robustness and focusing on reducing carbon footprint, organizations can dramatically reduce power, storage and cooling requirements while also improving efficiency. A win-win outcome.

In this podcast, Hakan Holmgren, EVP of Sales at Cubro, talks about how new technologies like Intel Barefoot ASICs can accelerate packet processing for cloud datacenters and edge deployments, and enable consolidation of infrastructure to reduce cost and minimize environmental impact.

Other episodes in the Secure Networks video/audio podcast series are available here.


Endace Packet Forensics Files: Episode #36

Original Entry by: Michael Morris

Michael talks to Neil Wilkins, Technical Director EMEA, Garland Technology

By Michael Morris, Director of Global Business Development, Endace



What does it mean to have security at scale? For large infrastructures with rapid data growth, have you maintained or improved your security posture as you have scaled?

In this episode of the Endace Packet Forensics Files, I talk with Neil Wilkins, Technical Director for EMEA at Garland Technology, who outlines some of the challenges he sees organizations facing when it comes to maintaining security at scale. He shares recommendations and best practices for getting on the right path to improving security in large environments.

Finally, Neil shares his thoughts on Security Orchestration, Automation and Response (SOAR) platforms and how they can help in environments with lots of tools and events and multiple teams trying to manage the cybersecurity infrastructure. He provides suggestions for rolling out SOAR solutions and highlights some things to avoid, to ensure the platform delivers the returns and efficiencies hoped for.

Having a large, dynamic infrastructure doesn’t mean you can’t keep your arms around your security posture, but you need to have processes and tools in place that can scale as you grow and accelerate incident response to keep ahead of growing threat volumes.

Other episodes in the Secure Networks video/audio podcast series are available here.