Endace Packet Forensics Files: Episode #35

Original Entry by : Michael Morris

Michael talks to Timothy Wilson-Johnston, Value Chain Security Leader, Cisco

By Michael Morris, Director of Global Business Development, Endace


What did we learn from the recent Log4j 2 vulnerability? How are security holes like this changing the way organizations think about deploying enterprise software solutions?

In this episode of the Endace Packet Forensics Files, Michael Morris talks with Timothy Wilson-Johnston about the Log4j 2 threat and how it is being exploited in the wild.

Timothy shares his thoughts about what Log4j 2 has taught us, and why organizations need to look at the bigger picture:

  • How you can better defend against vulnerabilities of this type
  • Why it’s so important to closely scrutinize the solutions you deploy – and make sure you have visibility into the components that might be included with those solutions

Finally, Timothy discusses the importance of weighing security against functionality, and why it is critical to have software inspection and validation processes in place to manage third-party risk to your business. Knowing what your vendors’ standards are, and implementing a structured and repeatable process for evaluating vendors and solutions, is key to improving security maturity.

 

Other episodes in the Secure Networks video/audio podcast series are available here.



Making Packet Forensics Easy

Original Entry by : Cary Wright

Extracting files and other information from recorded packet data

By Cary Wright, VP Product Management, Endace


Recorded network traffic often holds the vital clues needed to resolve serious cyber incidents or difficult network and application issues. The challenge has been finding a packet guru with the skills to search and analyse recorded traffic and extract the evidence needed to resolve the issue at hand. Such skilful analysts are a rare breed, so we have taken that expertise and packaged it into our latest EndaceProbe software.

Recorded network traffic is now faster to search from within existing security tools such as SIEM or SOAR platforms, and any team member can extract files and other important information with the click of a mouse.

Getting to the Packets Faster

Our integrations with partner solutions focus on making it quicker and easier for analysts to find and analyze the packet data they need to investigate and resolve incidents.

Analysts can go from an issue or alert in their security or performance monitoring tools directly to the related packet data in InvestigationManager™ with a click of the mouse. That can save hours of time spent extracting, downloading and carving up massive .pcap files so they can be opened in Wireshark®.

With EndaceVision, analysts can rapidly zoom the timeline in and out to look at precursor or post-event activity and understand the full scope of any event or alert. Analysis of packet data is done on the EndaceProbe appliances where it was recorded, using hosted Wireshark, without having to download or transfer large .pcap files across your network.

Making packet data even more useful

In the past, packet analysis required deep expertise and experience with tools like Wireshark or Zeek to extract essential information from recorded packet data. This made it difficult for less experienced analysts to extract value from packet data, and often meant that issues requiring packet forensics piled up on the desks of senior analysts.

With our latest software release (OSm 7.1), we’ve made it easy for even junior analysts to extract useful information from recorded packet data, without requiring deep knowledge of packet structures and decode tools. Simply select the traffic of interest in EndaceVision and, with a single click, extract malicious files or generate detailed log data from the selected packets. This makes investigating historical events fast and far more efficient – and because it does not require deep expertise, even junior analysts can perform packet forensics tasks.

Some examples of tasks that are made easier with the latest Endace software release include:

  • Reconstructing malware file downloads or transfers so you can submit them to a sandbox or antivirus tool.
  • Understanding exactly what data left your network by reconstructing file exfiltration events.
  • Easily generating logs from recorded traffic to look for things like unusual DNS activity, port scans, DDoS events, or other threatening activity (a rough sketch of what this kind of log generation involves follows below).
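For readers curious about what this kind of log generation involves under the hood, here is a minimal open-source sketch that summarises DNS queries from a saved capture using the Python scapy library. It is purely illustrative – the file name capture.pcap is a hypothetical placeholder – and it is not how the EndaceProbe feature is implemented.

```python
# Minimal sketch: summarise DNS queries in a saved capture with scapy.
# Illustrative only - "capture.pcap" is a hypothetical file name.
from collections import Counter

from scapy.all import rdpcap
from scapy.layers.dns import DNS, DNSQR

packets = rdpcap("capture.pcap")          # load the recorded packets
query_counts = Counter()

for pkt in packets:
    # Count only DNS queries (qr == 0) that carry a question record.
    if pkt.haslayer(DNS) and pkt[DNS].qr == 0 and pkt.haslayer(DNSQR):
        qname = pkt[DNSQR].qname.decode(errors="replace").rstrip(".")
        query_counts[qname] += 1

# A crude "log": the most frequently queried names, which can surface
# unusual DNS activity such as beaconing or tunnelling candidates.
for name, count in query_counts.most_common(20):
    print(f"{count:6d}  {name}")
```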

See how easy this is in the short 10-minute demonstration below (file extraction is at 08:15):

For more information on these great new features, or to arrange a demonstration to show how Endace could help you, contact us.


Multi-Tenancy introduced with OSm 7.1

Original Entry by : Cary Wright

Securely sharing packet capture infrastructure across multiple entities

By Cary Wright, VP Product Management, Endace


We are proud to announce that EndaceProbe now supports Multi-Tenancy – “Woo-hoo!” I hear you say. If you are an MSSP, MDR provider, service provider, or an organisation with multiple departments, your SOC teams can now reap the benefits of having access to weeks or months of continuously recorded network traffic whilst sharing the costs with other like-minded SOC teams. Let’s dig into what Multi-Tenancy is and why it’s important.

At the most basic level, Multi-Tenancy is the ability to host multiple “entities” (e.g. multiple customers or multiple organizational divisions) on a single architecture at the same time. To put it another way, Multi-Tenancy offers a way to share the costs of a system or service across more than one entity. Multi-tenancy can mean different things depending on your domain of expertise:

  • Cloud providers are inherently multi-tenanted, serving millions of clients with shared compute
  • Operating systems often host multiple tenants on a single machine
  • Networks can supply connectivity to multiple teams or organizations via a single infrastructure.

All these scenarios share the same essential requirements:

  1. Each tenant’s data must remain private and accessible to only that authorized tenant, and
  2. Each tenant needs access to reliable, predictable, or contracted resources – such as bandwidth, compute, storage, security services, expertise, etc.

Multi-tenancy can help organizations scale critical security services in a cost-efficient manner. A capable security architecture or service requires significant investment and the expertise to operate it. Sharing that investment makes these services available to organizations that might otherwise not be able to afford them.

A good example of where Multi-Tenancy can be extremely useful is the Security Operations Center (SOC). Typically, only large, well-funded organisations have the resources to build their own dedicated SOC. Multi-tenancy enables multiple organizations to share a SOC, each benefiting from a strengthened security posture without carrying the full burden of the cost and effort involved.

This is the model underpinning outsourced MSSP services, for example. But it can also be an ideal model for larger organizations with multiple divisions that need to maintain separation from each other, or for multiple individual companies owned by a common parent. It can also be a useful way to isolate a newly acquired company until its systems can be safely migrated to the new owner’s infrastructure.

We see lots of areas where organizations are benefiting from this ability to share infrastructure and services. So we are very pleased to announce that, with the new OSm 7.1 software release, the EndaceProbe Analytics Platform now also supports Multi-Tenancy for network recording.

This is especially useful where multiple tenants share the same network. A single EndaceProbe, or a fabric of EndaceProbes, can now be securely shared across multiple different organisations or tenants, while keeping the data for each tenant secure and private. EndaceProbes continuously record all network data on the shared network, but only provide each tenant with access to their own data.

In this case the tenancies are defined by VLANs, where each tenant has a VLAN, or set of VLANs, that carries only their traffic. When a user needs to investigate a security threat in their tenancy, they simply log into InvestigationManager to search, inspect, and analyse only the traffic that belongs to that tenancy. It’s as if each tenant has its own, wholly separate EndaceFabric dedicated just to its tenancy.
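To make the VLAN-based tenancy model more concrete, here is a rough conceptual sketch of the kind of per-tenant filtering it implies: each tenant is mapped to a set of VLAN IDs, and a tenant’s search only ever touches packets tagged with those VLANs. This is an illustration only, not the EndaceProbe implementation; the tenant names, VLAN IDs, and file name are made up.

```python
# Conceptual sketch of VLAN-defined tenancies: each tenant may only see
# packets carried on its own VLANs. Tenant names and VLAN IDs are made up.
from scapy.all import rdpcap
from scapy.layers.l2 import Dot1Q

TENANT_VLANS = {
    "tenant-a": {110, 111},
    "tenant-b": {220},
}

def packets_for_tenant(pcap_path: str, tenant: str):
    """Yield only the packets whose 802.1Q VLAN tag belongs to the tenant."""
    allowed = TENANT_VLANS[tenant]
    for pkt in rdpcap(pcap_path):
        if pkt.haslayer(Dot1Q) and pkt[Dot1Q].vlan in allowed:
            yield pkt

# Example: tenant-a's analyst searches the shared capture but sees only
# traffic carried on VLANs 110 and 111.
for pkt in packets_for_tenant("shared_capture.pcap", "tenant-a"):
    print(pkt.summary())
```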

This new capability is important for large organisations that serve multiple departments, agencies, or divisions. Service providers, MSSPs, and MDR providers that serve multiple clients will also benefit from Multi-Tenancy, giving each of their clients ready access to their own recorded network traffic for fast, secure, and private incident response.

We are very excited that this new Multi-Tenancy feature can help make Network Recording accessible for many more organizations, helping them to resolve incidents faster and with greater confidence.

For more information on this great new feature, or to arrange a demonstration to show how Endace could help you, contact us.


Endace Packet Forensics Files: Episode #33

Original Entry by : Michael Morris

Michael talks to NIST Fellow, Ron Ross

By Michael Morris, Director of Global Business Development, Endace



The dynamic nature and complexity of many organizations’ cyber infrastructure makes it hard enough to keep everything running and performing, let alone to maintain the highest levels of security to protect your IP and data. But do you know what the highest security standards actually are?

In this episode of the Endace Packet Forensics Files, I talk with NIST Fellow Ron Ross, who shares how cyber security standards are evolving to keep pace with new threats and challenges. Ron highlights where he sees most organizations falling short and the highest priorities they should be addressing. He also shares insights into new standards and recommendations for protecting operational technologies, which are becoming an attractive target for threat actors.

Finally, Ron talks about the need to move from a mindset of “prevention” to building “resiliency” into your security architecture to stay ahead of cyberthreats.

Other episodes in the Secure Networks video/audio podcast series are available here.


Triggered vs Continuous Capture

Original Entry by : Cary Wright

Can security teams afford to collect only a fraction of the evidence necessary to run down cyber attacks?

By Cary Wright, VP Product Management, Endace


Security teams rely on a variety of data sources when investigating security threats, including logs from network security devices, endpoint detection tools, and the various servers and AAA devices in the network. But often these logs are not sufficient to determine the seriousness and extent of a threat – they may be missing information, or they may have been manipulated by an attacker. So many analysts turn to continuous Full Packet Capture (FPCAP) to fully analyse threat activity.

Continuous Full Packet Capture records every packet (regardless of rules) to disk in a large rotating buffer, ensuring that any and all threat traffic is recorded for later analysis. FPCAP reveals every stage of an attack, including the initial exploit, command-and-control communications, and the surveillance and execution phases. FPCAP is highly trustworthy because it is very difficult for an attacker to manipulate, and because all packets are recorded we can be sure we have a record of all threat activity, whether or not that threat is detectable by security monitoring tools. Historically, however, it has been expensive to deploy enough FPCAP storage to provide the 30 days of look-back that security analysts typically need. This is where triggered capture enters the story.
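To get a feel for why 30 days of look-back has historically been expensive, here is a back-of-the-envelope storage estimate. The link speed and utilisation figures are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope storage estimate for continuous full packet capture.
# The link speed and average utilisation below are illustrative assumptions.
link_speed_gbps = 10          # monitored link speed
average_utilisation = 0.25    # assumed average load on the link
lookback_days = 30            # desired look-back window

avg_throughput_gbps = link_speed_gbps * average_utilisation
seconds = lookback_days * 24 * 3600

# Convert gigabits to terabytes: divide by 8 (bits -> bytes), then by 1000 (GB -> TB).
storage_tb = avg_throughput_gbps * seconds / 8 / 1000

print(f"~{storage_tb:,.0f} TB needed for {lookback_days} days of look-back")
# -> roughly 810 TB at these assumptions, before any compression.
```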

What is Triggered Packet Capture?

The concept of triggered packet capture is to capture only those packets related to detected or likely threats and discard the rest. The goal is to reduce the storage capacity needed to provide the necessary look-back period and thereby minimize the cost of storage.

One approach is to create rules or triggers that capture specific sequences or flows of packets and ignore anything that doesn’t match. Recorded packets are stored in a dedicated packet capture (PCAP) file on a local file system or RAID array, and at the same time an event notification may be logged to a SIEM or other security monitoring tool so that the PCAP can be found later.
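As a rough illustration of what this rule-based approach looks like in practice, the sketch below uses the Python scapy library to capture only packets matching a trigger filter, write them to a PCAP file, and emit a log line that a SIEM could ingest. The BPF filter, file name, and logging destination are illustrative assumptions, not a description of any particular product.

```python
# Rough sketch of rule-based triggered capture: record only packets that
# match a trigger filter, write them to a PCAP, and log an event for the SIEM.
# The BPF filter, file name, and packet count are illustrative assumptions.
import logging
from scapy.all import sniff, wrpcap

logging.basicConfig(format="%(asctime)s %(message)s", level=logging.INFO)

TRIGGER_FILTER = "tcp port 4444"        # hypothetical "suspicious" trigger rule
OUTPUT_PCAP = "trigger_event_0001.pcap"

# Capture up to 500 packets that match the trigger (requires capture privileges).
# Everything else is ignored - which is exactly why triggered capture can miss
# the rest of an attack.
packets = sniff(filter=TRIGGER_FILTER, count=500, timeout=60)

if packets:
    wrpcap(OUTPUT_PCAP, packets)
    # Event notification a SIEM could pick up, pointing at the stored PCAP.
    logging.info("trigger=%s packets=%d pcap=%s",
                 TRIGGER_FILTER, len(packets), OUTPUT_PCAP)
```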

Alternatively, the trigger might be a firewall signature or rule that causes a handful of packets to be recorded for each event, to show what caused the rule to fire. Many NGFWs offer this option, but typically with only very limited storage capacity.

A second approach is to only capture those packets that your security monitoring tools don’t recognise as “normal” on the basis that this anomalous traffic could potentially indicate a threat or attack.

The Problems with Triggered Packet Capture

There are serious flaws with both the approaches above that will leave security teams lacking the PCAP evidence they need to properly investigate events.

Firstly, it’s really not possible to predict ahead of time which attacks you might be subject to. What about attacks that leverage the next Heartbleed, SolarWinds, or Log4j 2 type of vulnerability? How can you write triggers or rules that will reliably capture all the packets relating to future, unknown threat vectors and vulnerabilities? More than likely, you will miss these attacks.

Secondly, capturing the packets relating to an initial trigger event such as an intrusion is useful, but it only tells part of the story. An initial exploit is just the beginning of an attack. You also need to capture downstream activity such as command and control communications, lateral movement, surveillance, exfiltration and other malicious activity. These will often be unique to each victim once the initial exploit has been successfully executed. And to investigate these conclusively you need access to full packet data. You can’t reconstruct exfiltrated data, or malware drops from metadata.

Lastly, attackers often try to camouflage their activity by using common protocols and traffic patterns to make their malicious activity look like normal user behavior. Capturing only “unknown” or “anomalous” traffic will most likely miss this activity.

These problems make triggered capture very costly in practice. Because your team has only part of the story, they must guess at what might have happened and assume the worst case when reporting to customers, markets, or affected parties. That lack of concrete evidence carries a high price – just to save a few dollars on storage.

Only Full Packet Data provides a Complete Picture

During the incident response process, once an exploit has been detected, analysts need to look at any subsequent activity from the affected host or hosts – such as payload downloads, command-and-control (C2) communications, lateral movement to other hosts on the network, and so on.

Only by examining full packet data can we really start to understand the full impact of a detected threat. If we only have triggered capture available, and no packets were captured relating to activity after the initial exploit was detected, how can we tell whether a backdoor was installed? Or what data was exfiltrated, so we understand our legal obligations to notify affected parties? Questions like these cannot remain unanswered.

Advances in storage media, capture techniques and compression technology have now made continuous full PCAP affordable. So there’s no longer a need to compromise by deploying unreliable triggered capture to save a few bucks on disk storage. It’s simpler, faster, and much more robust to capture every packet, so we are prepared to investigate any threat, new or old. Continuous full PCAP ensures teams can respond to incidents quickly and confidently, and in the long run it significantly reduces the cost of cybersecurity.


Endace Packet Forensics Files: Episode #32

Original Entry by : Michael Morris

Michael talks to Merritt Baer, Principal in the Office of the CISO at AWS

By Michael Morris, Director of Global Business Development, Endace



Is your organization trying to implement enterprise-level security at scale but unsure where to focus?

In this episode of the Endace Packet Forensics Files, I talk with Merritt Baer, Principal in the Office of the CISO at AWS, who shares her experience in how to design and build robust, dynamic security at scale. Merritt discusses what security at scale looks like, some of the things that are often missed, and how to protect rapidly evolving hybrid cloud infrastructures. She highlights some common pitfalls that organizations run into as they shift workloads to cloud providers, and how to pivot your SOC teams and tools to ensure you have robust security forensics in place.

Finally, Merritt examines how adopting SOAR platforms can help, and things you can do to prevent gaps and breakdowns in your security posture.

Other episodes in the Secure Networks video/audio podcast series are available here.


Log4j 2: A Week Look Back

Original Entry by : Michael Morris

Do you know if you have been attacked?

By Michael Morris, Director of Global Business Development, Endace



Many organizations have been scrambling this week to search their networks for any use of Log4j 2 libraries and to quickly patch applications, systems, appliances, or devices that might be using them. Lots of cycles are being spent reaching out to equipment and software vendors to determine whether their systems or applications are potentially affected, and applying fixes and updates to stop potential compromises. The primary response for most security teams has been to apply patches and plug the holes.

But what exactly is the threat? The Apache Log4j 2 Java library is vulnerable to a remote code execution vulnerability (CVE-2021-44228), known as Log4Shell. It gives remote, unauthenticated attackers the ability to execute arbitrary code, loaded from a malicious server, with the privileges of the Log4j 2 process.

It is nicely illustrated in this diagram from the Swiss Government Computer Emergency Response Team:

 

Log4j 2 - JNDI attack process
(from: https://www.govcert.ch/blog/zero-day-exploit-targeting-popular-java-library-log4j/)

Any system with this vulnerability is now an open entry point: attackers can use it to seed or execute remote code, and from there conduct any number of other nefarious activities.

I have been reading numerous articles and attending seminars from threat intel teams, such as Palo Alto Networks’ Unit 42, that discuss the scale and severity of the risk this zero-day threat poses to organizations. There are several key takeaways.

First, because of the prevalence of this vulnerability, literally millions of systems are at risk. Second, because of the scale of attacks leveraging it, there have already been several compromises and ransomware attacks. However, much of the threat actor activity to this point appears to be reconnaissance and the planting of additional malware that can be used later, once threat actors have obtained initial access to a network and the systems on it.

Our technology partner, Keysight Technologies, has been tracking honeypot activity that shows huge numbers of exploitation attempts – demonstrating how many threat actors are scanning the internet looking for vulnerable systems.

Industry-wide there are already a huge number of bots scanning the internet simply looking for openings. Key advice from threat intel teams is to immediately isolate any impacted servers as they are truly open backdoors to the rest of your infrastructure. There are numerous tools out there to scan your environment for Log4j 2 use.

Anywhere Log4j 2 is found, you need to isolate the system and investigate for potential compromise. It’s essential to put in place policies, rules, and filters to monitor outbound (egress) traffic to unknown IP addresses. Pay extra attention to common protocols like LDAP, LDAPS, RMI, and DNS, as these are the key protocols being leveraged for lateral movement and reconnaissance. And if you are unable to isolate potentially compromised systems, look for anomalous or unexpected traffic to and from them.
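For a rough idea of how you might sweep recorded traffic for this kind of outbound egress, the sketch below (using the Python scapy library) flags connections from an assumed internal address range to external hosts on the LDAP, LDAPS, RMI, and DNS ports. The internal prefix and capture file name are illustrative assumptions.

```python
# Rough sketch: flag outbound connections from internal hosts to external
# addresses on ports commonly abused post-Log4Shell (LDAP, LDAPS, RMI, DNS).
# The internal prefix and pcap file name are illustrative assumptions.
import ipaddress
from scapy.all import rdpcap
from scapy.layers.inet import IP, TCP, UDP

INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")   # assumed internal range
WATCH_PORTS = {389: "LDAP", 636: "LDAPS", 1099: "RMI", 53: "DNS"}

for pkt in rdpcap("egress_window.pcap"):
    if not pkt.haslayer(IP):
        continue
    src = ipaddress.ip_address(pkt[IP].src)
    dst = ipaddress.ip_address(pkt[IP].dst)
    layer4 = pkt.getlayer(TCP) or pkt.getlayer(UDP)
    if layer4 is None:
        continue
    # Internal source talking outbound to an external host on a watched port.
    if src in INTERNAL_NET and dst not in INTERNAL_NET and layer4.dport in WATCH_PORTS:
        print(f"{src} -> {dst}:{layer4.dport} ({WATCH_PORTS[layer4.dport]})")
```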

Of course, you should also ensure your IDSs and firewalls have updated rule sets for Log4j 2 so you can block or detect any future attempts to infect your network. This needs to be done quickly, so you can get on with the job of reviewing any damage that may already have been done.

If you’re collecting network metadata in a SIEM such as Splunk or Elastic, the first place to start looking is your HTTP transactions: search them for strings containing JNDI lookups. Our partner Splunk published a blog on how to do this here:

https://www.splunk.com/en_us/blog/security/log4shell-detecting-log4j-vulnerability-cve-2021-44228-continued.html

Once you have identified any JNDI calls, it’s critical to review the recorded network packet data to determine if any outgoing connections were made from potentially compromised servers.

EndaceProbes can capture weeks or months of packet data, allowing you to quickly review potential threats that may have occurred prior to the public announcement of the Log4j 2 vulnerability. Chris Greer has published a very useful YouTube video on how to use Wireshark to identify and analyze a Log4j 2 attack. Well worth watching:

Once you have identified connections that contain the JNDI string, you can quickly examine any subsequent outgoing connections from the affected host to see whether it successfully contacted the malicious LDAP server and downloaded Java malware. Knowing whether or not this step happened will save your team many days of incident response and allow them to focus on the servers that have actually been compromised.
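To illustrate those two steps in open-source terms, the sketch below first searches raw packet payloads for a "${jndi:" string and then lists subsequent outbound connection attempts from the hosts that received it. It is a deliberate simplification – it ignores TLS, stream reassembly, and encoding-based evasions – and the capture file name is an illustrative assumption.

```python
# Simplified two-step hunt: (1) find packets whose payload contains a JNDI
# lookup string, (2) list outbound connection attempts from the targeted hosts.
# Ignores TLS, stream reassembly, and obfuscated payloads; pcap name is assumed.
from scapy.all import rdpcap, Raw
from scapy.layers.inet import IP, TCP

packets = rdpcap("log4shell_window.pcap")

# Step 1: hosts that received a payload containing "${jndi:".
targeted_hosts = set()
for pkt in packets:
    if pkt.haslayer(Raw) and pkt.haslayer(IP) and b"${jndi:" in pkt[Raw].load.lower():
        targeted_hosts.add(pkt[IP].dst)
        print(f"JNDI string seen: {pkt[IP].src} -> {pkt[IP].dst}")

# Step 2: outbound TCP connection attempts (pure SYN) from those hosts, which
# may indicate contact with an attacker-controlled LDAP or HTTP server.
for pkt in packets:
    if pkt.haslayer(TCP) and pkt.haslayer(IP) and pkt[IP].src in targeted_hosts:
        if str(pkt[TCP].flags) == "S":   # SYN only: a new outbound connection attempt
            print(f"Outbound SYN: {pkt[IP].src} -> {pkt[IP].dst}:{pkt[TCP].dport}")
```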

Good luck with the Log4j 2 threat hunting! To learn more about how cost-effective and simple it can be to have an always-on network packet capture platform, integrated with the rest of your security tools, to help you search for Log4j 2 and other zero-day attacks, go to www.endace.com.