I just happened to look behind us
05 January 2015
I’m only going to discuss how to build your own solution, one you can integrate with your existing tools and processes. That means I’ll be ignoring any all-in-one options, whether commercial or open source. The aim here, after all, isn’t to dump (yet) more tools in front of already overloaded staff (or yourself).
I’m also not going to cover operating system choice or hardware sizing. The latter in particular becomes more complex as peak bandwidths increase and deserves to be far more than a footnote.
Basic IPFIX (or NetFlow) data is useful, and if all you can do is record this data then it will help you. However, I recommend that you look beyond this to the application layer data (often called AppFlow). This strikes a practical balance between full packet capture and high-level session information. AppFlow, for instance, will give you all the key HTTP headers for an HTTP session, or the contents of a DNS query. Some tools will even record the Shannon entropy of a session payload – allowing you to identify whether that session on port 443 is SSL, RDP or something else. I’ll talk more about the potential value of that information another time.
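To get a feel for what that entropy figure tells you, here’s a minimal sketch that computes the byte-level Shannon entropy of a file (standing in for a session payload). This is purely an illustration, not how any particular AppFlow tool does it:

```shell
# entropy FILE — byte-level Shannon entropy, in bits per byte.
# 0 means every byte is identical; values approaching 8 mean a near-uniform
# spread of byte values, as you'd expect from encrypted or compressed traffic.
entropy() {
  od -An -tu1 -v "$1" | tr -s ' ' '\n' |
    awk 'NF { count[$1]++; n++ }
         END { for (b in count) { p = count[b] / n; h -= p * log(p) }
               printf "%.2f\n", h / log(2) }'
}
```

Run `entropy payload.bin` against a plaintext HTTP payload and then against an SSL one, and the difference is obvious: the former typically scores well below the latter.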
Your two main choices are commercial and open source solutions, including:
- Many devices that can generate IPFIX data can also generate AppFlow data – check to see if your existing routers, switches, firewalls and perimeter protection devices can do this
- Many companies also produce dedicated IPFIX probes that can generate AppFlow
- Citrix NetScaler
- Cisco MediaNet
- Lancope StealthWatch
- Dell SonicWALL
- YaF is an open source tool to produce AppFlow augmented IPFIX records
- nProbe is a relatively low cost commercial solution (with GPL license) that has AppFlow augmented IPFIX support
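As a concrete example of the open source route, a YaF invocation for live capture with application labelling might look something like the sketch below. The flag names are taken from YaF’s documentation, but the interface, collector address and port are placeholders – verify all of them against the version you install:

```shell
# Sketch only: capture on eth0 and export AppFlow-augmented IPFIX over TCP.
# --applabel turns on application labelling and requires --max-payload so YaF
# has enough of each payload to inspect; 10.0.0.5:18001 stands in for your
# collector. Held in a variable here rather than executed.
YAF_CMD="yaf --in eth0 --live pcap \
  --out 10.0.0.5 --ipfix tcp --ipfix-port 18001 \
  --applabel --max-payload 384"
echo "$YAF_CMD"
```

You’d normally run this under your init system or a process supervisor rather than by hand, since a dead probe means a blind spot.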
Be warned that many network devices that generate IPFIX data do so at a lower priority, or only sample the traffic. In the case of the former you may find that you’re missing records; in the case of the latter, you’re guaranteed to. You’ll often get higher quality output from a dedicated device, whether a commercial platform or one you build yourself.
Your choice will depend on whether you want something commercially supported, as well as what features you’re looking for.
If you are using a hardware IPFIX generator then you’ll need a collector. This is optional with most software solutions. Be careful though, as many collectors only support either IPFIX or NetFlow and not both, so ensure that your chosen collector supports the protocol your generators are producing. If you’re using AppFlow, ensure your collector also supports it.
Some options include:
- nProbe or ntopng
- Plixer Scrutinizer
- SonicWALL Scrutinizer
- Solarwinds NetFlow Traffic Analyzer
- ManageEngine NetFlow Analyzer
Here, your choice will depend on how you want to integrate the data, and whether you want an all-in-one tool or your own choice of search and visualisation tools.
This is the simplest part of the package – reading the data in from the network and writing it out onto disk. I’m discounting dedicated capture devices here since the aim is to be able to run your chosen IDS across the packet store.
Practical notes here:
- Use pcap format – it’s widely supported and unlikely to cause any surprises
- Rotate the files regularly – once a minute is reasonable for a network peaking at 1 Gb/s
- You may also want to rotate at a maximum size to cap the time it takes to search any given file later
- Check you won’t exceed the limits of your file system (such as the number of files in a single directory) – I’d suggest a directory per day of pcap files
- Consider using a ramdisk to act as a buffer between the network and the disk – this can reduce the risk of lost packets during heavy I/O load – but you’ll need a big enough ramdisk for at least 2 full capture files (each being 7.5 GB per minute at 1 Gb/s)
- Structure your files to make it easier to find the traffic for a particular point in time and to purge old traffic
- Consider compressing the pcap after capture – for example with xz[i] you can save about 12%, providing you with a larger capture window
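The sizing figures above are easy to sanity-check. A back-of-the-envelope sketch, assuming a sustained 1 Gb/s and the one-minute rotation suggested above (real traffic is bursty, so treat these as rough upper bounds):

```shell
rate_bps=1000000000                            # sustained 1 Gb/s
bytes_per_min=$((rate_bps / 8 * 60))           # 7,500,000,000 — ~7.5 GB per one-minute file
ramdisk_min=$((2 * bytes_per_min))             # buffer for at least 2 full files: ~15 GB
disk_bytes=6000000000000                       # e.g. a 6 TB volume for the packet store
retention_min=$((disk_bytes / bytes_per_min))  # 800 minutes — barely 13 hours of traffic
echo "$bytes_per_min $ramdisk_min $retention_min"
```

Thirteen-odd hours from 6 TB is why the ~12% saving from xz – and rotating on size as well as time – is worth having.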
Most importantly, check your legal and regulatory constraints, as well as your organisation’s policies, before you start. Data privacy law is an often overlooked element of enterprise monitoring programmes, so make sure you don’t fall foul of it.
You have a rich choice of tools for packet capture, including:
- daemonlogger (used by OpenFPC)
- netsniff-ng (used by SecurityOnion)
- moloch (Yara support, but currently IPv4 only)
- streamdb (requires the Vortex IDS)
If you’re using Snorby as your IDS console then OpenFPC with daemonlogger is an obvious choice, since it’s fully supported. If you’re already using the Vortex IDS then streamdb is another obvious choice.
Otherwise, these tools (tcpdump included) can all rotate to a new file based upon the duration and/or size of the capture. They also include the date and time of the start of the capture in the filename (some human readable, some in Unix epoch format). This is helpful when you only want to search across a limited time window: instead of re-processing 6 TB of packet capture, you’re only re-processing 2 GB, and you’ll get your answers in seconds, not hours.
If you want to roll your own solution, tcpdump provides the most power and flexibility, since it can run a script each time it rotates to a new file. For the other tools, on Linux you can get the same result by using inotify to watch for capture files being closed and then calling a script to process them as you desire. Your mileage on other operating systems will vary.
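To make that concrete, here’s a minimal post-rotate hook in the spirit described above: it files each finished capture into a per-day directory, which keeps both purging old traffic and time-window searches cheap. The `cap-YYYYmmddHHMMSS.pcap` naming and the base directory are assumptions for this sketch:

```shell
# file_pcap BASEDIR FILE — move a finished capture file into a per-day
# directory, BASEDIR/YYYY-MM-DD/, derived from the timestamp embedded
# in the (assumed) filename cap-YYYYmmddHHMMSS.pcap.
file_pcap() {
  base=$1
  f=$2
  name=$(basename "$f")
  ts=${name#cap-}                # strip the assumed prefix...
  ts=${ts%.pcap}                 # ...and suffix, leaving YYYYmmddHHMMSS
  day=$(printf '%s' "$ts" | cut -c1-4)-$(printf '%s' "$ts" | cut -c5-6)-$(printf '%s' "$ts" | cut -c7-8)
  mkdir -p "$base/$day"
  mv "$f" "$base/$day/$name"
  # Optionally compress here (e.g. xz "$base/$day/$name") for a longer window.
}
```

Wrapped in a small script, this hangs straight off tcpdump: `tcpdump -i eth0 -s 0 -G 60 -w 'cap-%Y%m%d%H%M%S.pcap' -z ./file-pcap.sh` rotates every 60 seconds and calls the script with each closed file as its argument. With other capture tools, an `inotifywait` loop watching for close events can call the same function.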
Once you have the packet store, you’ll need to install your IDS of choice (Snort, Suricata, Bro, Vortex etc). This should be given the same basic configuration as your primary IDS. As you won’t be running that instance live however, you could increase limits on session reconstruction or search depths to improve the ability to detect activity.
You’ll then need a mechanism to feed rules to the IDS, run the IDS over the packet store (or a subset of it) and retrieve the results. For simplicity, the results should feed into the same interface as the primary IDS – ideally this should appear to be just another sensor.
Remember though that the IDS can only operate on the packet store. To increase the time window you review you’ll want to use the extracted IPFIX data as your primary search, where you have extracted the key data. For some signatures you may be able to use the extracted IPFIX data as the entire search – though it will be up to you to integrate that with your IDS console.
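Putting the timestamped filenames to work, here’s a sketch of selecting just the files for a given window before handing them to the IDS. The `cap-YYYYmmddHHMMSS.pcap` naming is an assumption carried over from the capture setup:

```shell
# pcaps_between DIR START END — print the capture files in DIR whose
# embedded timestamp falls within [START, END], both given in
# YYYYmmddHHMMSS form. Plain numeric comparison works because that
# format sorts chronologically.
pcaps_between() {
  dir=$1 start=$2 end=$3
  for f in "$dir"/cap-*.pcap; do
    [ -e "$f" ] || continue                  # glob matched nothing
    ts=${f##*/cap-}
    ts=${ts%.pcap}
    if [ "$ts" -ge "$start" ] && [ "$ts" -le "$end" ]; then
      printf '%s\n' "$f"
    fi
  done
}
```

The result then loops through your IDS’s offline mode, for example `pcaps_between /data/pcap 20150105120000 20150105130000 | while read -r f; do suricata -c suricata.yaml -r "$f"; done` – both Snort and Suricata read a pcap with `-r`.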
I don’t suppose you could…?
Once you have both the raw traffic and the flow records then you need an interface to allow you to search all of this and return useful information. How you approach this will depend on the other tools you’re using and what your goals are, but options include:
- Sumo Logic
- WSO2 Business Activity Monitor
- SIEM platforms
There are also various tools built on top of ElasticSearch.
None of these tools will solve all your search requirements, but they’ll solve many of them. Whichever you choose, you’ll end up with an information flow running from capture, through extraction and indexing, to search and alerting.
With the benefit of hindsight we can see…
Now you have a starting point for the tools you need to instrument your network, and make use of that data. The goal is to improve the visibility of activity on your network, without unduly increasing the workload. Achieving that is not likely to be quick or easy, but the benefits can be significant.
Once you’ve completed the instrumentation of your perimeter, it will be time to look to the soft core of your network. There are many useful sources of information there that can improve both the visibility, and your understanding of the activity. I’ll talk more about those another time.
How confident are you that you know what’s happening, and has happened, on the network you’re responsible for?
[i] Installed on many Unix-like operating systems, or available via their package manager – see http://tukaani.org/xz/ for other platforms