Endace Packet Forensics Files: Episode #8

Original Entry by : Michael Morris

Michael talks to Scott Register, VP of Security Solutions for Keysight Technologies

By Michael Morris, Director of Global Business Development, Endace


Catch our latest episode of “Secure Networks – the Packet Forensic Files” vidcast/podcast series with this week’s special guest Scott Register, VP of Security Solutions for Keysight Technologies.

Scott, with his years of experience in building security solutions, shares some of the biggest challenges SecOps teams are facing in today’s environment and what they are doing to solve them.

He talks about the latest trends in the threat landscape and what security teams are doing to test and monitor for these attacks. Hear how threat simulation can help validate both tool readiness and people processes to elevate your security prevention and response.

Finally, Scott shares his insights into implementing security in 5G and WiFi infrastructures as well as traditional networks and data centers.

Other episodes in the Secure Networks video/audio podcast series are available here.


Endace Packet Forensics Files: Episode #7

Original Entry by : Michael Morris

Michael talks to Travis Rosiek, CTO and Strategy Officer at BluVector (a Comcast company)

By Michael Morris, Director of Global Business Development, Endace


If you haven’t caught up with the insights from our “Secure Networks – the Packet Forensics Files” vidcast/podcast series yet, here is your chance to see what you have been missing out on. This week’s special guest is Travis Rosiek, CTO and Strategy Officer for BluVector (a Comcast company).

Travis, a long-time government cybersecurity specialist, shares his insights into what companies and government agencies are missing from their security strategies. He talks about how you can begin to move your security activity from merely reactive to a more proactive approach.

Travis discusses some of the specific challenges and advantages government agencies face compared to enterprises and what both groups can do to elevate their security posture.  He also shares his insights into best practices to protect your IT infrastructure and things to look out for in the ever-changing security landscape.

Other episodes in the Secure Networks video/audio podcast series are available here.


Endace Packet Forensics Files: Episode #6

Original Entry by : Michael Morris

Michael talks to Betty DuBois, Founder and CEO of Packet Detectives

By Michael Morris, Director of Global Business Development, Endace


Don’t miss the latest episode of our Endace Packet Forensic Files Vidcast/Podcast series with this week’s special guest Betty DuBois, CEO/Founder of Packet Detectives and renowned SharkFest speaker.

Betty talks about the challenges NetOps and SecOps teams are facing in today’s IT environment. She highlights best practices teams are adopting to adjust to today’s environments and shares her recommendations about how NetOps and SecOps teams can elevate their network investigation skills and processes.

Betty also gives some great tips on how to become a packet capture and Wireshark “power-user” and addresses some of the misconceptions about PCAP data.

Other episodes in the Secure Networks video/audio podcast series are available here.


Endace Packet Forensics Files: Episode #5

Original Entry by : Michael Morris

Michael talks to Gerard Martir, Network Solutions Team Specialist at Keysight Technologies

By Michael Morris, Director of Global Business Development, Endace


Tune in for the latest episode of our Endace Packet Forensic Files Vidcast/Podcast series with this week’s special guest Gerard Martir, Network Solutions Team Specialist for Keysight Technologies.

Gerard’s years of experience in the telecom space give him great insight into how carriers are addressing cybersecurity, and how the rollout of 5G will deliver better performance and tighter security.

Gerard talks about some of the adjustments telecom providers are making in the era of the global pandemic and the changing priorities caused by massive shifts to remote workforces across the globe. He also provides insight into some of the technology best practices carriers are implementing to ensure performance, resiliency and security across their cutting-edge networks.

Other episodes in the Secure Networks video/audio podcast series are available here.


Wireshark without the wait!

Original Entry by : Cary Wright

With Wireshark on EndaceProbe you can quickly search hundreds of Terabytes of packet data to analyze important packets in Wireshark

By Cary Wright, VP Product Management, Endace


Who can afford to wait when responding to a critical security incident? With Wireshark now hosted on EndaceProbe we have eliminated all the waiting around to see packet evidence. Reviewing captured network history will often reveal vital evidence needed to remediate a threat, evidence that may have been wiped from system logs.

Unfortunately, if you’re using Wireshark on your desktop to view that evidence, you know it can be a very slow process. Just downloading a multi-GB capture file from your capture appliance can take a while, and then loading it up on your desktop can also be lengthy. All this waiting and context switching is a productivity hit for you and your team – not to mention a data privacy risk if those PCAPs are sitting on your desktop or laptop.
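
To put rough numbers on that wait – a back-of-the-envelope sketch, assuming the network link is the bottleneck and ignoring protocol overhead:

```python
# Rough transfer-time estimate: how long moving a capture file takes at a
# given link rate. Illustrative arithmetic only.
def transfer_seconds(size_gb: float, link_gbps: float) -> float:
    """Seconds to move size_gb gigabytes over a link_gbps-gigabit link."""
    return size_gb * 8 / link_gbps  # gigabytes -> gigabits, then divide by rate

print(transfer_seconds(50, 1))    # a 50 GB capture over 1 Gbps: ~400 s
print(transfer_seconds(0.01, 1))  # a 10 MB filtered slice: well under a second
```

Searching and filtering at the source, and only ever moving the handful of relevant packets, is what collapses that wait.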

I’m excited to say there will be no more waiting around to view packets with our newly released OSm 7.0 software! A full instance of native Wireshark is now hosted right on each EndaceProbe appliance so you can review captured network traffic quickly and securely. We have also included Wireshark on each Endace InvestigationManager instance, allowing you to search across up to 100 EndaceProbes in parallel and present a single merged packet view inside Wireshark.

There is no need to download large PCAPs over the network, and no need to store them insecurely on your desktop PC or laptop to view in Wireshark. Viewing network packet captures is now lightning fast because EndaceProbe high-performance hardware serves the packets from the local RAID directly to a Wireshark instance hosted on the EndaceProbe.

If you’re a regular Wireshark user, you’ll know that Wireshark doesn’t handle large PCAPs very well: just loading a 1GB file can take forever, let alone a 100TB capture. With Wireshark on EndaceProbe you can now quickly search hundreds of Terabytes of packet data to view or analyze important packets in Wireshark. The workflow is much faster and more secure. And Wireshark power users will be glad to know it’s a full Wireshark instance with all the useful features and decodes that you’ve come to know and love.

Here’s a sneak preview:

Wireshark on EndaceProbe with OSm7
With OSm 7.0, now you can go directly from EndaceVision to Wireshark hosted on EndaceProbes – without having to download large pcap trace files.

Endace + XSOAR = Nirvana for the SoC

Original Entry by : Cary Wright

Integrating Palo Alto Cortex XSOAR with the EndaceProbe Analytics Platform

By Cary Wright, VP Product Management, Endace


This week we are announcing an exciting integration with Palo Alto Networks Cortex XSOAR, formerly Demisto. This integration provides XSOAR customers with automated playbooks that easily pull in packet-level evidence for fast, conclusive, and repeatable response to security incidents. It complements our existing integrations with Palo Alto Networks NGFW and Panorama, so now you can access packet-level data across multiple Palo Alto solutions.

So what is this “Nirvana for the SoC” we are all striving for?

The most effective SoC teams I’ve seen are well-oiled machines, reviewing and resolving many potentially dangerous security incidents each day and neutralizing threats quickly and confidently. What makes these teams successful is a repeatable and well-understood process, based on evidence, backed by automation, with integrated workflows across a suite of best-in-class security tools.

These teams have a wide range of experience – from new recruits to seasoned experts – all highly motivated and working collaboratively to solve complex issues. This exceptional environment not only provides high levels of productivity and security, but is also great for team morale, staff retention, and hiring. Adding new staff is streamlined because all the processes are documented and/or automated, workflows are simple, and less experienced hires can contribute quickly. I’m sure you would agree this is the SoC team Nirvana we are all striving for.

SoC teams are flying blind without network packet history at their fingertips. Sophisticated attackers do their best to cover their tracks by modifying server logs or deleting evidence. However, packets don’t lie and can’t be tampered with. That’s why many SoC teams deploy EndaceProbes alongside their firewalls so they can turn to the packets to investigate their most challenging security incidents. It’s the evidence needed to know without a doubt what happened at 2pm last Tuesday when a security alert indicated a potential attack.

We integrated with Cortex XSOAR because we realized that many teams were missing the essential packet-level evidence required for fast and conclusive security investigations. XSOAR playbooks now automate the collection of packet evidence from any EndaceProbe in the deployment. Packet evidence is then archived and attached to a “case” or “war room” allowing multiple team members to contribute to the investigation at any time in the future.
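
As a rough illustration of the automation flow described above, a playbook step that collects packet evidence might look something like the sketch below. The client object, method names and incident fields are illustrative only; they are not the actual Cortex XSOAR or EndaceProbe APIs.

```python
# Hypothetical sketch of a playbook step that collects packet evidence for
# an incident. All names here are invented for illustration.
def collect_packet_evidence(incident: dict, probe_client) -> str:
    """Search packet history matching the incident and archive the result."""
    search = probe_client.search(
        src=incident["src_ip"],
        dst=incident["dst_ip"],
        start=incident["start_time"],
        end=incident["end_time"],
    )
    # Archive so the evidence survives normal capture-buffer rotation,
    # then attach the reference to the incident's "war room".
    archive_id = probe_client.archive(search)
    incident["evidence"].append(archive_id)
    return archive_id
```

The value of automating this step is that the evidence is captured and attached while it still exists, rather than depending on an analyst remembering to fetch it later.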

The complete workflow can be integrated with the entire security tool suite including endpoint, network, SIEM, NGFW, and other security elements. And finally, these playbooks can be customized to suit the specific needs of the organization.

Check out the demo video on Palo Alto Network’s Fusion partner page to see this integration in action, and reach out if you’d like more information.

I am very proud of what our team has achieved with this integration to Cortex XSOAR. Our customers can now manage alerts across all sources using a standard process, take action on threat intel, and automate response for any security use case – resulting in significantly faster responses that require less manual review. I’m really looking forward to seeing our customers take advantage of this new capability to create their own SoC team Nirvana.

Happy hunting,

Cary

 


Packet Detectives Episode 2: The Case of the Unknown TLS Versions

Original Entry by : Michael Morris

Demystifying Network Investigations with Packet Data

By Michael Morris, Director of Global Business Development, Endace


Michael Morris, Director of Global Business Development, Endace

As we discussed with Ixia and Plixer recently in our How to Combat Encrypted Threats webinar (which you can watch here if you are interested) newer versions – 1.2 and 1.3 – of TLS should be preferred over older versions – 1.0 and 1.1 – because they’re much more secure, and better protect data in flight.

But removing older versions of TLS from your network can be challenging. First, you need to identify which versions are actually being used. Second, you need to identify which servers and clients are using outdated versions. And lastly, you need to update any servers inside your network that are using older TLS versions, and potentially block access to servers outside the network that are using older versions too – all without causing your users to scream!

It’s not just users you need to worry about either. Potentially you may have IoT devices on your network that are still using older TLS versions.

Thankfully, if you have access to recorded network traffic there’s an easy way …

In this second installment of Packet Detectives, industry-renowned SharkFest presenter and all-round Wireshark guru, Betty DuBois, shows how you can quickly answer all these questions using Wireshark to analyze the TLS traffic on your network to see which hosts and clients are using which versions. She has even created a special, custom Wireshark profile you can download to make the analysis even easier!
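
For a feel of what that analysis keys on: TLS records carry a two-byte version field, and mapping those bytes to names is the core of any version audit. A minimal hand-rolled sketch for illustration (Wireshark, of course, does all of this for you):

```python
# Map the two-byte TLS version field to the names discussed above.
TLS_VERSIONS = {
    (3, 1): "TLS 1.0",
    (3, 2): "TLS 1.1",
    (3, 3): "TLS 1.2",
    (3, 4): "TLS 1.3",
}

def tls_record_version(record: bytes) -> str:
    """Read the legacy version bytes from a raw TLS record header."""
    # Byte 0 is the content type (0x16 = handshake); bytes 1-2 are the version.
    return TLS_VERSIONS.get((record[1], record[2]), "unknown")

# A handshake record whose header claims TLS 1.0:
print(tls_record_version(bytes([0x16, 0x03, 0x01, 0x00, 0x2F])))  # TLS 1.0
```

One wrinkle worth knowing: a TLS 1.3 connection still puts 0x0303 (the TLS 1.2 value) in this legacy field and signals 1.3 via the supported_versions extension, so the record-level bytes alone can mislead.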

The truth is in the packets …

We hope you find this video useful. Please let us know if you have ideas for other examples you’d like to see.


Network Security and Management Challenges Blog Series – Part 4

Original Entry by : Endace

Driving Economic Efficiency in Cyber Defense

Key Research Findings

  • Available budget, freedom to choose the best solutions and platform fatigue are all impacting on the ability of system architects to design and deploy the best solutions to meet the organization’s needs.
  • 78% of system architects reported platform fatigue is a significant challenge, with 29% rating the level of challenge as high.
  • More than 90% of respondents reported that the process of acquiring and deploying security, network or application performance platforms is challenging, with almost half reporting that it is either extremely or very challenging.

Most of what’s written about cybersecurity focuses on the mechanics of attacks and defense. But, as recent research shows, the economics of security is just as significant. It’s not just lack of available budget – departments always complain about that – but how they are forced to allocate their budgets.

Currently, security solutions are often hardware-based, which forces organizations into making multiple CAPEX investments – with accompanying complex, slow purchase processes.

More than three-quarters of respondents to the survey reported that “the challenge of constraints caused by CAPEX cycle (e.g. an inability to choose best possible solutions when the need arises) is significant.” Almost half reported being stuck with solutions that have “outlived their usefulness, locked into particular vendors or unable to choose best-of-breed solutions.”

Speed of deployment is also a significant challenge for organizations, with more than 50% of respondents reporting that “deploying a new security, network or application performance platform takes six to twelve months or longer.” 

As outlined in the previous post, existing security solutions are expensive, inflexible, hardware-dependent and take too long to deploy or upgrade. The process of identifying a need, raising budget, testing, selecting and deploying hardware-based security and performance monitoring solutions simply takes too long. And the cost is too high.

Contrast this with cyber attackers, who don’t require costly hardware to launch their attacks. They are not hampered by having to negotiate slow, complex purchase and deployment cycles. And often they leverage their target’s own infrastructure for attacks. The truth is that the economics of cybersecurity is broken: with the balance radically favoring attackers at the expense of their victims.

Reshaping the economics of cyberdefense

Companies have a myriad of choices when it comes to possible security, network performance and application performance monitoring solutions. Typically, they deploy many different tools to meet their specific needs. 

As discussed in the previous post, the lack of a common hardware architecture for analytics tools has prevented organizations from achieving the same cost savings and agility in their network security and monitoring infrastructure that virtualization has enabled in other areas of their IT infrastructure. As a result, budgets are stretched, organizations don’t have the coverage they’d like (leading to blindspots in network visibility) and deploying and managing network security and performance monitoring tools is slow, cumbersome and expensive.

Consolidating tools onto a common hardware platform – such as our EndaceProbe – helps organizations overcome many of the economic challenges they face:

  • It lets them reduce their hardware expenditure, resulting in significant CAPEX and OPEX savings. 
  • Reduced hardware expenditure frees up budget that can be directed towards deploying more tools in more places on the network – to remove visibility blind spots – and deploying tools the company needs but couldn’t previously afford.
  • Teams gain the freedom to choose what tools they adopt without being locked into “single-stack” vendor solutions. 
  • Teams can update or replace security and performance monitoring functions by deploying software applications on the existing hardware platform without a rip-and-replace. This significantly reduces cost and enables much faster, more agile deployment.

The cost of the hardware infrastructure needed to protect and manage the networks can also be shared by SecOps, NetOps, DevOps and IT teams, further reducing OPEX and CAPEX costs and facilitating closer cooperation and collaboration between teams.

For architects, a common hardware platform becomes a network element that can be designed into the standard network blueprint – reducing complexity and ensuring visibility across the entire network. And for IT teams responsible for managing the infrastructure it avoids the platform fatigue that currently results from having to manage multiple different hardware appliances from multiple different vendors.

Because analytics functionality is abstracted from the underlying EndaceProbe hardware, that functionality can be changed or upgraded easily, enabling – as we saw in the last post – far more agile deployment and the freedom to deploy analytics tools that best meet the company’s needs rather than being locked into specific vendors’ offerings.

Equally importantly, it extends the useful life of the EndaceProbe hardware too. No longer does hardware have to be replaced in order to upgrade or change analytics functionality. And as network speeds and loads increase, older EndaceProbes can be redeployed to edge locations and replaced at the network core with newer models offering higher-speeds and greater storage density. This ensures companies get maximum return on their hardware investment.

Lastly, the EndaceProbe’s modular architecture allows multiple physical EndaceProbes to be stacked or grouped to form centrally-managed logical EndaceProbes capable of scaling to network speeds of hundreds of gigabits-per-second and storing petabytes of network history.

A Final Word

This blog series has looked at the three key challenges – Visibility, Agility and Economic Efficiency (this post) – that enterprises report they face in protecting their networks and applications from cyber threats and costly performance issues. These challenges are interrelated: it is only by addressing all three that organizations can achieve the level of confidence and certainty necessary to effectively protect their critical assets.


Network Security and Management Challenges – Part 3: Agility

Original Entry by : Endace

The Need for Agile Cyberdefense – and How to Achieve it

Key Research Findings

  • 75% of organizations report significant challenges with alert fatigue and 82% report significant challenges with tool fatigue.
  • 91% of respondents report significant challenges in “integrating solutions to streamline processes, increase productivity and reduce complexity”.
  • Investigations are often slow and resource-intensive, with 15% of issues taking longer than a day to investigate and involving four or more people in the process.

In part two of this series of blog posts, we looked at Visibility as one of the key challenges uncovered in the research study Challenges of Managing and Securing the Network 2019.

In this third post, we’ll be discussing another of the key challenges that organizations reported: Agility

From a cybersecurity and performance management perspective, the term “Agility” can mean two different things. In one sense it can mean the ability to investigate and respond quickly to cyber threats or performance issues. But it can also refer to the ability to rapidly deploy new or upgraded solutions in order to evolve the organization’s ability to defend against, or detect, new security threats or performance issues. 

To keep things clear let’s refer to these two different meanings for agility as “Agile Response” and “Agile Deployment.”

Enabling Agile Response

In the last post, we looked at the data sources organizations can use to improve their visibility into network activity – namely using network metadata, combined with full packet data, to provide the definitive evidence that enables analysts to quickly and conclusively investigate issues. 

In order to leverage this data, the next step is to make it readily available to the tools and teams that need access to it. Tools can access the data to more accurately detect issues, and teams get quick and easy access to the definitive evidence they need to investigate and resolve issues faster and more effectively. 

Organizations report that they are struggling with two significant issues when it comes to investigating and resolving security or performance issues. 

The first is that they are drowning in the sheer volume of alerts being reported by their monitoring tools. Investigating each issue is a cumbersome and resource-intensive process, often involving multiple people. As a result, there is typically a backlog of issues that never get looked at – representing an unknown level of risk to the organization.

The second issue, which is compounding the alert fatigue problem, is that the tools teams use are not well-integrated, making the investigation process slow and inefficient.  In fact, 91% of the organizations surveyed reported significant challenges in “integrating solutions to streamline processes, increase productivity and reduce complexity.” The result is analysts are forced to switch from tool to tool (also known as “swivel chair integration”) to try and piece together a “big-picture” view of what happened.

Integrating network metadata and packet data into security and performance monitoring tools is a way to overcome both these challenges:

  • It gives teams access to a shared, authoritative source of truth about network activity. Analysts can pivot from an alert, or a metadata query, directly to the related packets for conclusive verification of what took place. This simplifies and accelerates investigations, making teams dramatically more productive and eliminating alert fatigue.
  • It enables a standardized investigation process. Regardless of the tool an analyst is using, they can get directly from an alert or query to the forensic detail – the packets – in the same way every time. 
  • It enables data from multiple sources to be correlated more easily. This is typically what teams are looking to achieve through tighter tool integration. Network data provides the “glue” (IP addresses, ports, time, application information etc.) that enables data from other diverse sources (log files, SNMP alerts etc.) to be correlated more easily. 
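
To make the “glue” point concrete, here is a minimal sketch of joining an alert from one tool to flow metadata from another on the shared network fields. The schemas are invented for illustration, not any particular tool’s format:

```python
# Sketch: the 5-tuple-plus-time "glue" that joins an alert to flow metadata.
# Field names are illustrative, not a real product schema.
from datetime import datetime, timedelta

def matching_flows(alert, flows, window=timedelta(seconds=30)):
    """Return flows that share the alert's endpoints within a time window."""
    key = (alert["src"], alert["dst"], alert["dport"])
    return [f for f in flows
            if (f["src"], f["dst"], f["dport"]) == key
            and abs(f["start"] - alert["time"]) <= window]

alert = {"src": "10.0.0.5", "dst": "203.0.113.9", "dport": 443,
         "time": datetime(2020, 3, 3, 14, 0, 12)}
flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.9", "dport": 443,
     "start": datetime(2020, 3, 3, 13, 59, 58), "bytes": 18_400_000},
    {"src": "10.0.0.7", "dst": "198.51.100.2", "dport": 80,
     "start": datetime(2020, 3, 3, 14, 0, 5), "bytes": 1_200},
]

print(matching_flows(alert, flows))  # only the large flow to 203.0.113.9
```

The same join keys work whether the second source is a log file, an SNMP trap or a metadata store, which is why network data makes a good common denominator.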

By leveraging a common, authoritative source of packet-level evidence organizations can create a “community of interoperability” across all their security and performance monitoring tools that drives faster response and greater productivity.

By integrating this packet-level network history with their security tools, SecOps teams can pivot quickly from alerts to concrete evidence, reducing investigation times from hours or days to just minutes.

Endace’s EndaceProbe Analytics Platform does this by enabling solutions from leading security and performance analytics vendors – such as BluVector, Cisco, Darktrace, Dynatrace, Micro Focus, IBM, Ixia, Palo Alto Networks, Splunk and others – to be integrated with and/or hosted on the EndaceProbe platform. Hosted solutions can analyze live packet data for real-time detection or analyze recorded data for back-in-time investigations.

The EndaceProbe’s powerful API-based integration allows analysts to go from alerts in any of these tools directly to the related packet history for deep, contextual analysis with a single click. 
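
At its core, that one-click pivot boils down to translating an alert’s network fields and a time window into a packet-history query. A hypothetical sketch follows; the URL scheme and parameter names are invented, not the real EndaceProbe API:

```python
# Hypothetical "pivot-to-packets" URL builder: an alert's endpoints and a
# time window become a packet-history search. Scheme and parameters are
# invented for illustration.
from urllib.parse import urlencode

def pivot_url(probe_host: str, alert: dict, window_s: int = 60) -> str:
    """Build a search link covering window_s seconds either side of the alert."""
    params = {
        "ip.src": alert["src"],
        "ip.dst": alert["dst"],
        "start": alert["ts"] - window_s,
        "end": alert["ts"] + window_s,
    }
    return f"https://{probe_host}/search?{urlencode(params)}"

print(pivot_url("probe.example.com",
                {"src": "10.0.0.5", "dst": "203.0.113.9", "ts": 1583244012}))
```

Because the link is computed from fields every alert already has, any tool that can render a URL can offer the pivot, which is what makes a standardized investigation process possible.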

The Road to Agile Deployment

The research showed that many organizations report their lack of visibility is due to having “too few tools in too few places in the network.” There are two reasons for this. One is economic – and we’ll look at that in the next post. The other is that the process of selecting and deploying new security and performance monitoring solutions is very slow.

The reason deploying new solutions is so slow is that they are typically deployed as hardware-based appliances. And as we all know, the process of acquiring budget for, evaluating, selecting, purchasing and deploying hardware can take months. Moreover, appliance-based solutions are prone to obsolescence and are difficult or impossible to upgrade without complete replacement. 

All these things make for an environment that is static and slow-moving: precisely the opposite of what organizations need when seeking to be agile and evolve their infrastructure quickly to meet new needs. Teams cannot evolve systems quickly enough to meet changing needs – which is particularly problematic when it comes to security, because the threat landscape changes so rapidly. As a result, many organizations are left with security solutions that are past their use-by date but can’t be replaced until their CAPEX value has been written down.

The crux of the problem is that many analytics solutions rely on collecting and analyzing network data – which means every solution typically includes its own packet capture hardware. 

Unlike the datacenter, where server virtualization has delivered highly efficient resource utilization, agile deployment and significant cost savings, there isn’t – or rather hasn’t been until now – a common hardware platform that enables network security and performance analytics solutions to be virtualized in the same way. A standardized platform for these solutions needs to include the specialized, dedicated hardware necessary for reliable packet capture and recording at high speed.

This is why Endace designed the EndaceProbe™ Analytics Platform. Multiple EndaceProbes can be deployed across the network to provide a common hardware platform for recording full packet data while simultaneously hosting security and performance analytics tools that need to analyze packet data. 

Adopting a common hardware platform removes the hardware dependence that currently forces organizations to deploy multiple hardware appliances from multiple vendors and frees them up to deploy analytics solutions as virtualized software applications. This enables agile deployment and gives organizations the freedom to choose the security, application performance and network performance solutions that best suit their needs, independent of the underlying hardware.

In the next post, we’ll look at how a common platform can help address some of the economic challenges that organizations face in protecting their networks. 


Network Security and Management Challenges – Part 2: Visibility

Original Entry by : Endace

Stop Flying Blind: How to ensure Network Visibility

Network Visibility Essential to Network Security

Key Research Findings

  • 89% of organizations lack sufficient visibility into network activity to be certain about what is happening.
  • 88% of organizations are concerned about their ability to resolve security and performance problems quickly and accurately.

As outlined in the first post in this series, lack of visibility into network activity was one of the key challenges reported by organizations surveyed by VIB for the Challenges of Managing and Securing the Network 2019 research study. This wasn’t a huge surprise: we know all too well that a fundamental prerequisite for successfully protecting networks and applications is sufficient visibility into network activity. 

Sufficient visibility means being able to accurately monitor end-to-end activity across the entire network, and recording reliable evidence of this activity that allows SecOps, NetOps and DevOps teams to react quickly and confidently to any detected threats or performance issues. 

Context is Key

It might be tempting to suggest that lack of network visibility results from not collecting enough data. Actually, the problem is not having enough of the right data: data that provides the context for a coherent big-picture view of activity, and the granular detail needed for accurate event reconstruction. This leaves organizations questioning their ability to adequately protect their networks.

Without context, data is just noise. Data tends to be siloed by department. What is visible to NetOps may not be visible to SecOps, and vice versa. It is often siloed inside specific tools too, forcing analysts to correlate data from multiple sources to investigate issues because they lack an independent and authoritative source of truth about network activity. 

Typically, organizations rely on data sources such as log files, and network metadata, which lack the detailed data necessary for definitive event reconstruction. For instance, while network metadata might show that a host on the network communicated with a suspect external host, it won’t give you the full details about what was transferred. For that, you need full packet data. 

In addition, network metadata and packet data are the only data sources that are immune to potential compromise. Log files and other data sources can be tampered with by cyber attackers to hide evidence of their presence and activity; or may simply not record the vital clues necessary to investigate a threat or issue.

Combining Network Metadata with Full Packet Data for 100% Visibility

The best possible solution to improving visibility is a combination of full packet data and rich network metadata. Metadata gives the big picture view of network activity and provides an index that allows teams to quickly locate relevant full packet data. Full packet data contains the “payload” that lets teams reconstruct, with certainty, what took place.

Collecting both types of data gives NetOps, DevOps and SecOps teams the information they need to quickly investigate threats or performance problems coupled with the ability to see precisely what happened so they know how to respond with confidence.

This combination provides the context needed to deliver both a holistic picture of network activity and the detailed granular data required to give certainty. It also provides an independent, authoritative source of network truth that makes it easy to correlate data from multiple sources – such as log files – and validate their accuracy.
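
The “metadata as an index” idea can be sketched in a few lines. The records and their packet pointers below are invented for illustration:

```python
# Sketch of "metadata as an index": each flow record points at where its
# packets are stored, so a big-picture query jumps straight to the
# full-payload evidence. Schema and file names are invented.
metadata = [
    {"flow": ("10.0.0.5", "203.0.113.9", 443), "suspicious": True,
     "packets": ("store-03.pcap", 102_400)},   # (capture file, byte offset)
    {"flow": ("10.0.0.7", "198.51.100.2", 80), "suspicious": False,
     "packets": ("store-01.pcap", 9_000)},
]

def locate_packets(records, predicate):
    """Query the metadata layer; return pointers to the matching packet data."""
    return [r["packets"] for r in records if predicate(r)]

print(locate_packets(metadata, lambda r: r["suspicious"]))
# -> [('store-03.pcap', 102400)]
```

The metadata answers the fast “what happened, roughly?” question, and the pointer it carries takes the analyst directly to the packets that answer “what happened, exactly?”.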

With the right evidence at hand, teams can respond more quickly and accurately when events occur. 

In the next post in this series, we’ll look at how to make this evidence easily accessible to the teams and tools that need it – and how this can help organizations be more agile in responding to security threats and performance issues.