Packet Detectives Episode 1: The Case of the Retransmissions

Original Entry by: Michael Morris

Demystifying Network Investigations with Packet Data

By Michael Morris, Director of Global Business Development, Endace



As I talk to security analysts, network operations engineers and application teams around the world, a common theme regularly emerges: troubleshooting security or performance issues with log or flow data alone just doesn’t cut it.

Most folks report spending way too many hours troubleshooting problems only to realize they just don’t have enough detail to know exactly what happened. Often this results in more finger-pointing and unresolved issues. Too much time spent investigating issues also causes other alerts to pile up, resulting in stress and undue risk to the organization from a backlog of alerts that never get looked at.

On the other hand, those that use full packet capture data to troubleshoot problems report significantly faster resolution times and greater confidence because they can see exactly what happened on the wire.

Many folks I talk to also say they don’t have the expertise necessary to troubleshoot issues using packet data. But it’s actually much easier than you might expect. Packet decode tools – like Wireshark – are powerful and quite self-explanatory. And there are tons of resources available on the web to help you out. You don’t need to be a mystical networking guru to gain valuable insights from packet data!
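To make that concrete, here’s a minimal sketch of how packet data can surface the retransmissions this episode investigates, using the Python pyshark library (a wrapper around Wireshark’s tshark). It assumes tshark is installed, the capture is IPv4, and “capture.pcap” is a hypothetical file name; tcp.analysis.retransmission is Wireshark’s own display filter for retransmitted segments.

```python
# A minimal sketch: count TCP retransmissions per flow with pyshark.
# Assumes Wireshark/tshark is installed, the capture is IPv4, and
# "capture.pcap" is a hypothetical file name.
from collections import Counter

import pyshark

# Let Wireshark's own expert analysis flag retransmitted segments.
cap = pyshark.FileCapture(
    "capture.pcap",
    display_filter="tcp.analysis.retransmission",
)

retransmissions = Counter()
for pkt in cap:
    # Key each count by the TCP conversation endpoints.
    flow = (pkt.ip.src, pkt.tcp.srcport, pkt.ip.dst, pkt.tcp.dstport)
    retransmissions[flow] += 1
cap.close()

# The flows retransmitting the most are the first suspects in a slow-down.
for (src, sport, dst, dport), count in retransmissions.most_common(5):
    print(f"{src}:{sport} -> {dst}:{dport}  {count} retransmissions")
```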

Getting to the relevant packets is quick and easy too thanks to the EndaceProbe platform’s integration with solutions from our Fusion Partners like Cisco, IBM, Palo Alto Networks, Splunk and many others. Analysts can quickly pivot from alerts in any of those tools directly to related packet data with a single click, gaining valuable insights into their problems quickly and confidently.

To help further, we thought it would be useful to kick off a video series of “real-world” investigation scenarios to show just how easily packet data can be used to investigate and resolve difficult issues (security or performance-related) in your network.

So here’s the first video in what we hope to make a regular series. Watch as industry-renowned SharkFest presenter and all-round Wireshark guru, Betty Dubois, walks us through investigating an application slow-down that is causing problems for users. The truth is in the packets …

We hope you find this video useful. Please let us know if you have ideas for other examples you’d like to see.


The Importance of Network Data to Threat Hunting (Part 3)

Original Entry by: Robert Salier

Frameworks and Regulations

By Robert Salier, Product Manager, Endace


In this, the third article in our series on threat hunting (see here for Part 1 and Part 2), we explore the frameworks and regulations most relevant to threat hunting.

These tend to fall into two categories: those that address cybersecurity at a governance level, and those that facilitate insight into individual attacks and help formulate appropriate defense actions.

Governance Level Frameworks and Regulations

The regulatory environment influences threat hunting, and cyber defense in general. In many countries, regulations impose obligations on disclosure of breaches, including what information must be provided, when, and to which stakeholders. This influences what an organization needs to know about a breach, and hence its choice of strategies, policies, processes and tools. These regulations generally require companies to disclose a breach to all customers that have been affected. However, if an organization cannot ascertain which customers were affected, or even whether any customers were affected at all, it may need to contact every customer. The only thing worse than having to disclose a breach is having to disclose it without being able to provide the details your customers expect you to know.

There are also a number of frameworks addressing cybersecurity at the governance level, which in some cases overlap with regulations, dealing with many of the same issues and considerations. Collectively, these frameworks and regulations help ensure organizations implement good strategies, policies, processes and tools, e.g. …

  • Which systems and data are most important to the organization
  • What information security policies should be in place
  • How cybersecurity should be operationalized (e.g. what organizational structure, security architecture and systems are most appropriate for the organization)
  • Incident management processes
  • Best practice guidelines

Prevalent frameworks and regulations include…

  • ISO 27000 Series of Information Security Standards
    A comprehensive family of standards providing a set of best practices for information security management. Maintained by the International Organization for Standardization (ISO), it has been broadly adopted around the globe.
  • NIST Special Publication 800-53
    A catalogue of security and privacy controls for all U.S. federal organizations except those related to national security.
  • NIST Cybersecurity Framework
    A policy framework for private sector organizations to assess and improve their ability to prevent, detect, and respond to cyber attacks. It was developed for the USA, but has been adopted in a number of countries.

Frameworks to Characterize Attacks and Facilitate Responses

A number of frameworks have been developed to help describe and characterize attacker activity, and ultimately facilitate defense strategies and tactics.

Prevalent frameworks include…

  • Cyber Kill Chain
    Developed by Lockheed Martin, this framework was adapted from a “kill chain” model originally devised for military attack and defense. It decomposes a cyber attack into seven generic stages, providing a framework for characterizing and responding to attacks. Refer to this Dark Reading article for some discussion on the benefits and limitations of this framework.
  • Diamond Model
    This model describes attacks by decomposing each one into four key aspects: the adversary, their capabilities, the infrastructure they used, and the victim(s). Multiple attack diamonds can be plotted graphically in various ways, including timelines and groupings, facilitating deeper insight (see the sketch following this list).
  • MITRE ATT&CK
    Developed by MITRE, ATT&CK stands for “Adversarial Tactics, Techniques, and Common Knowledge”. It is essentially a living, growing knowledge base capturing intelligence gained from millions of attacks on enterprise networks. It consists of a framework that decomposes a cyber attack into eleven different phases, a list of techniques adversaries use in each phase, documented real-world use of each technique, and a list of known threat actor groups. ATT&CK is becoming increasingly popular, and is used and contributed to by many security vendors and consultants.
  • OODA Loop
    Describes a process cycle of “Observe – Orient – Decide – Act”. Originally developed for military combat operations, it is now being applied to commercial operations.
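As a rough illustration of how these frameworks translate into practice, here is a minimal Python sketch representing the Diamond Model’s four aspects as a data structure, tagged with MITRE ATT&CK technique IDs. The field names and example values are illustrative assumptions, not an official schema from either framework.

```python
# A minimal sketch of the Diamond Model's four aspects as a data structure,
# tagged with MITRE ATT&CK technique IDs. The fields and example values are
# illustrative assumptions, not an official schema from either framework.
from dataclasses import dataclass, field


@dataclass
class DiamondEvent:
    adversary: str             # who is behind the activity (if attributed)
    capability: str            # the malware, tool, or technique employed
    infrastructure: str        # the C2 servers, domains, or IPs used
    victim: str                # the targeted organization, asset, or person
    attack_techniques: list = field(default_factory=list)  # ATT&CK IDs


# T1566 (Phishing) and T1059 (Command and Scripting Interpreter) are
# real ATT&CK technique IDs; the scenario itself is invented.
event = DiamondEvent(
    adversary="unknown",
    capability="macro-laden invoice document",
    infrastructure="mail relay at 203.0.113.7",  # documentation IP range
    victim="finance workstation FIN-042",
    attack_techniques=["T1566", "T1059"],
)
print(event)
```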

The Importance of Network Data to Threat Hunting (Part 2)

Original Entry by: Robert Salier

Threat Hunting in Practice

By Robert Salier, Product Manager, Endace


Hunting for security threats involves looking for traces of attackers in an organization’s IT environment, both past and present. It involves creativity combined with (relatively loose) methodologies and frameworks, focused on outsmarting an attacker.

Threat hunting relies on a deep knowledge of the Tactics, Techniques and Procedures (TTPs) that adversaries use, and a thorough knowledge of the organization’s IT environment. Well-executed threat hunts provide organizations with deeper insight into their IT environment and into where attackers might hide.

This, the second article in our series of blog posts on threat hunting (read Part 1 here), looks at how leading organizations approach threat hunting, and the various data, resources, systems, and processes required to threat hunt effectively and efficiently.

Larger organizations tend to have higher public profiles, more valuable information assets, and complex and distributed environments that present a greater number of opportunities for criminals to infiltrate, hide, and perform reconnaissance without detection. When it comes to seeking out best practice, it’s not surprising that large organizations are the place to look.

Large organizations recognize that criminals are constantly looking for ways to break in undetected and that it is only a matter of time before they succeed, if they haven’t already. While organizations of all sizes are being attacked, larger organizations are the leaders in this proactive approach to hunting down intruders, i.e. “threat hunting”. They have recognized that active threat hunting increases detection rates compared to relying on incident detection alone – i.e. waiting for alerts from automated intrusion detection systems that may never come.

Best practice involves formulating a hypothesis about what may be occurring, then seeking to confirm it. Common categories of hypothesis include:

  • Driven by threat intelligence from industry news, reports, and feeds.
    e.g. newsfeeds report a dramatic increase in occurrences of a specific ransomware variant targeting your industry, so a threat hunt is initiated with the hypothesis that your organization is being targeted with this ransomware.
  • Driven by situational awareness, i.e. a focus on the infrastructure, assets and data most important to the organization.
    e.g. a hypothesis that your customers’ records are the “crown jewels”, so hackers will be trying to gain access to exfiltrate this data.

Having developed a hypothesis as a starting point, leading organizations rely on a range of tools and resources to threat hunt efficiently and effectively:

Historic Data from Hardware, Software and the Network
  • Infrastructure Logs from the individual components of hardware and software that form your IT environment, e.g. firewalls, IDS, switches, routers, databases, and endpoints. These logs capture notable events, alarms and other useful information, which, when pieced together, can provide valuable insight into historic activity in your environment. They’re like the study notes you take from a textbook: highly useful, but not a full record, just a summary of what is considered notable. Also, be wary that hackers often delete or modify logs to remove evidence of their malicious activity.
  • Summarized network data (a.k.a. “packet metadata” or “network telemetry”). Traffic on network links can be captured and analyzed in real time to generate a feed of summary information characterizing the network activity. The information that can be obtained goes well beyond the flow summaries that NetFlow provides, e.g. by identifying and summarizing activity and anomalies up to and including layer 7, such as email header information and expired certificates. This metadata can be very useful in hunts and investigations, particularly to correlate network traffic with events and activity from infrastructure logs and with user activity (see the sketch following this list). Also, unlike logs, packet metadata cannot be easily deleted or modified.
  • Packet-level network history. By capturing and storing packets from a network link, you have a verbatim copy of the communication over that link, allowing you to see precisely what was sent and received, with zero loss of fidelity. Some equipment such as firewalls and IDSs capture small samples of packets, but these capture just a fraction of a second of communications, and therefore must be automatically triggered by a specific alarm or event. Capturing and storing all packets (“full packet capture”, “100% packet capture”) is the only way to obtain a complete history of all communications. Historically, the barriers to full packet capture have been the cost of the required storage and the challenge of locating the packets of interest, given the sheer volume of data. However, recent advances in technology are now breaking down those barriers.
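To illustrate the difference between raw packets and metadata, here is a minimal sketch that condenses a capture file into per-flow summaries using the Python scapy library. “link_capture.pcap” is a hypothetical file name, and commercial metadata generation extracts far richer layer-7 detail than this; the point is only to show how packets reduce to compact, searchable records.

```python
# A minimal sketch: condense raw packets into per-flow summary records
# using scapy. "link_capture.pcap" is a hypothetical capture file; real
# metadata generation extracts far richer layer-7 detail than this.
from collections import defaultdict

from scapy.all import IP, TCP, rdpcap

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

for pkt in rdpcap("link_capture.pcap"):
    if IP in pkt and TCP in pkt:
        key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
        flows[key]["packets"] += 1
        flows[key]["bytes"] += len(pkt)

# Each flow record is a tiny fraction of the size of the raw packets,
# which is what makes metadata cheap to store and fast to search.
for (src, sport, dst, dport), stats in flows.items():
    print(f"{src}:{sport} -> {dst}:{dport}  "
          f"{stats['packets']} pkts, {stats['bytes']} bytes")
```
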
Baselines

Baselines are an understanding of what is normal and what is anomalous. Threat hunting involves examining user, endpoint, and network activity, searching for IoAs (Indicators of Attack) and IoCs (Indicators of Compromise) – i.e. “clues” pointing to possible intrusions and malicious activity. The challenge is knowing which activity is normal, and which is anomalous. Without knowing that, in many cases you will not know whether certain activity is to be expected in your environment, or whether it should be investigated.
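Here is a minimal sketch of the idea, assuming made-up per-host traffic numbers: learn a statistical baseline from historic observations, then flag anything that deviates sharply from it.

```python
# A minimal sketch of baselining: learn "normal" from history, then flag
# sharp deviations. The per-hour byte counts are made-up numbers.
from statistics import mean, stdev

# Hypothetical history: bytes sent per hour by one host.
history = [1_200, 950, 1_100, 1_300, 1_050, 990, 1_150]

baseline_mean = mean(history)
baseline_std = stdev(history)


def is_anomalous(observation: float, threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations
    from the learned baseline (a simple z-score test)."""
    return abs(observation - baseline_mean) > threshold * baseline_std


print(is_anomalous(1_180))   # False: within the normal range
print(is_anomalous(54_000))  # True: anomalous, worth a hunt
```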

A Centralized Location for Logs and Metadata

Because there are so many disparate sources of logs, centralized collection and storage is a practical necessity for organizations with substantial IT infrastructure. Most organizations use a SIEM (Security Information and Event Management) system, which may have a dedicated database for storage of logs and metadata, or may use an enterprise data lake. SIEMs can correlate data from multiple sources, support rule-based triggers, and can feature machine learning algorithms able to learn what activity is normal (i.e. “baselining”). Having learned what is normal, they can then identify and flag anomalous activity.
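As an illustration of the kind of rule-based correlation a SIEM performs over centralized logs, here is a minimal Python sketch: repeated failed logins followed by a success from the same source IP. The records, field names and threshold are illustrative assumptions, not any particular SIEM’s format.

```python
# A minimal sketch of SIEM-style correlation over centralized logs:
# repeated failed logins followed by a success from the same source IP.
# The records, field names and threshold are illustrative assumptions.
from collections import defaultdict

logs = [
    {"ts": 1, "src": "198.51.100.9", "event": "login_failed"},
    {"ts": 2, "src": "198.51.100.9", "event": "login_failed"},
    {"ts": 3, "src": "198.51.100.9", "event": "login_failed"},
    {"ts": 4, "src": "198.51.100.9", "event": "login_success"},
    {"ts": 5, "src": "203.0.113.20", "event": "login_success"},
]

FAIL_THRESHOLD = 3
failures = defaultdict(int)

for record in sorted(logs, key=lambda r: r["ts"]):
    if record["event"] == "login_failed":
        failures[record["src"]] += 1
    elif record["event"] == "login_success":
        # A success after repeated failures suggests a brute-force win.
        if failures[record["src"]] >= FAIL_THRESHOLD:
            print(f"ALERT: possible brute force from {record['src']}")
        failures[record["src"]] = 0
```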

Threat Intelligence

Threat intelligence is knowledge that helps organizations protect themselves against cyber attacks. It encompasses both business level and technical level detail. At a business level this includes general trends in malicious activity, individual breaches that have occurred, and how organizations are succeeding and failing to protect themselves. At a technical level, threat intelligence provides very detailed information on how individual threats work, informing organizations how to detect, block, and remove these threats. Generally this comes in the form of articles intended for consumption by humans, but also encompasses machine-readable intelligence that can be directly ingested by automated systems, e.g. updates to threat detection rules.
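At its simplest, ingesting machine-readable intelligence looks something like the following sketch: match observed connections against a feed of known-bad addresses. The feed contents and connection records here are made-up assumptions; real feeds typically arrive in formats such as STIX and are refreshed continuously.

```python
# A minimal sketch of ingesting machine-readable threat intelligence:
# match observed connections against a set of known-bad IPs. The feed
# contents and connection records are made-up; real feeds typically
# arrive in formats such as STIX and are refreshed continuously.
bad_ips = {"192.0.2.44", "198.51.100.77"}  # e.g. parsed from a feed

observed_connections = [
    ("10.0.0.5", "192.0.2.44"),     # internal host -> known-bad address
    ("10.0.0.8", "93.184.216.34"),  # benign traffic
]

for src, dst in observed_connections:
    if dst in bad_ips:
        print(f"ALERT: {src} contacted known-bad address {dst}")
```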

Frameworks and Regulations

The regulatory environment influences threat hunting, and cyber defense in general. In many countries, regulations impose obligations on disclosure of breaches, including what information must be provided, when, and to which stakeholders. There are also a number of frameworks addressing cybersecurity at the governance level, which in some cases overlap with regulations, dealing with many of the same issues and considerations. Collectively, these frameworks and regulations help ensure organizations implement good strategies, policies, processes and tools.

In the next article in this series, we explore the frameworks and regulations that apply to threat hunting, and which ensure organizations implement appropriate strategies, policies, processes and tools.