nfportscan is a small open-source tool for analyzing netflow records (for example, those generated by Cisco routers and switches) that were captured by nfcapd from the nfdump program suite. With nfportscan, you can filter out scans on one port over a range of IP addresses and present the results in a well-arranged way. It was developed within the Network Operations Center (NOC) at the Center for Computing and Communication of RWTH Aachen University (Germany) by Alexander Neumann (fd0) and Florian Weingarten (fw42) under the supervision of Jens Hektor. An older version can be found here: git.lochraster.org.
nfportscan was developed with efficiency in mind (the nfcapd files from our switches for a five-minute interval are about 85 megabytes each!).
Several files have been taken from the nfdump source code, which is distributed under the BSD license. We therefore decided to distribute all other files of this project under the same license. Each file contains a header declaring the specific copyright and licensing information for that file.
nfportscan is written in the C programming language. The package contains a Makefile, which should handle all of the compilation. You will need OpenMP, because nfportscan uses multithreading (since v0.52) to analyze the files (one thread per file).
You can get a list of the available command line options with the -h switch:
$ ./nfportscan -h
USAGE: nfportscan [OPTIONS] FILE [FILE] ...
-t --threshhold set dsthost minimum for an ip address to be reported
(default: 100)
-T --firstlast show timestamps of first and last sights of flow
-s --timeformat overwrite time string format
(default: "%d.%m.%y %H:%M:%S", strftime() syntax)
-D --lastduration show duration instead of last timestamp
(in combination with -T)
-H --sort-hosts sort by host destination count
-f --sort-flows sort by flow count
-i --sort-ip sort by host source ip
-P --sort-port sort by destination port
-b --sort-first sort by timestamp of first sight
-e --sort-duration sort by duration between first and last sight
-a --order-asceding sort list ascending
-d --order-desceding sort list descending
-p --processors set number of processors/threads to use (max: 8)
-F --filter apply filter before counting
-c --csv output data separated by TAB and NEWLINE
-v --verbose set verbosity level
-V --version print program version
-h --help print this help
Assume your nfcapd files are stored in the data/ subdirectory. You could, for example, scan all files from April 15th, 2009, from 15:00 to 16:00, sorted by length of the scan, with the following command:
$ ./nfportscan -TDev data/nfcapd.2009041515*
threshhold is 100, sorting by duration (descending)
Thread 1: processing file data/nfcapd.200904151510
Thread 2: processing file data/nfcapd.200904151520
Thread 4: processing file data/nfcapd.200904151540
Thread 3: processing file data/nfcapd.200904151530
Thread 0: processing file data/nfcapd.200904151500
Thread 5: processing file data/nfcapd.200904151550
Thread 1: processing file data/nfcapd.200904151515
Thread 3: processing file data/nfcapd.200904151535
Thread 0: processing file data/nfcapd.200904151505
Thread 2: processing file data/nfcapd.200904151525
Thread 5: processing file data/nfcapd.200904151555
Thread 4: processing file data/nfcapd.200904151545
Total: scanned 26462014 flows, found 26451203 incident flows (99.96%)
sorting result list...
* 137.226.142.999 -> 80 (TCP): 818 dsts ( 6283 flows, 128772 pckts, 8789295 octs) (15.04.09 14:54:56, 65 min 01 sec)
* 137.226.113.9 -> 9001 (TCP): 333 dsts ( 4747 flows, 689736 pckts, 258386609 octs) (15.04.09 14:54:57, 64 min 59 sec)
* 137.226.28.99 -> 3/ 3 (ICMP): 656 dsts ( 1274 flows, 2684 pckts, 379248 octs) (15.04.09 14:55:40, 64 min 14 sec)
* 137.226.138.999 -> 80 (TCP): 1225 dsts ( 11238 flows, 228887 pckts, 17080277 octs) (15.04.09 14:55:47, 64 min 07 sec)
* 137.226.147.99 -> 27960 (UDP): 1423 dsts ( 2794 flows, 746650 pckts, 163998058 octs) (15.04.09 14:55:45, 64 min 07 sec)
* 137.226.138.999 -> 5121 (UDP): 530 dsts ( 6394 flows, 6855 pckts, 411300 octs) (15.04.09 14:55:48, 64 min 06 sec)
* 134.130.187.999 -> 80 (TCP): 186 dsts ( 1756 flows, 593730 pckts, 292961355 octs) (15.04.09 14:55:47, 64 min 03 sec)
* 134.130.133.999 -> 80 (TCP): 322 dsts ( 2683 flows, 161127 pckts, 13798131 octs) (15.04.09 14:55:38, 63 min 58 sec)
* 137.226.138.999 -> 23127 (UDP): 487 dsts ( 14732 flows, 18879 pckts, 1208553 octs) (15.04.09 14:55:57, 63 min 57 sec)
* 134.130.200.99 -> 80 (TCP): 122 dsts ( 1587 flows, 62465 pckts, 5108408 octs) (15.04.09 14:55:58, 63 min 56 sec)
* 134.130.55.999 -> 80 (TCP): 124 dsts ( 1311 flows, 244467 pckts, 13538479 octs) (15.04.09 14:55:54, 63 min 48 sec)
* 137.226.113.9 -> 443 (TCP): 277 dsts ( 3037 flows, 466413 pckts, 310237937 octs) (15.04.09 14:56:05, 63 min 46 sec)
* 134.61.41.999 -> 80 (TCP): 128 dsts ( 1561 flows, 93935 pckts, 10012863 octs) (15.04.09 14:56:10, 63 min 39 sec)
* 134.130.50.999 -> 80 (TCP): 178 dsts ( 1167 flows, 27817 pckts, 3181680 octs) (15.04.09 14:56:15, 63 min 39 sec)
* 134.130.71.999 -> 80 (TCP): 158 dsts ( 1921 flows, 18727 pckts, 5061782 octs) (15.04.09 14:56:04, 63 min 36 sec)
* 137.226.39.999 -> 3/ 3 (ICMP): 2405 dsts ( 3258 flows, 5329 pckts, 658500 octs) (15.04.09 14:56:27, 63 min 27 sec)
* 137.226.138.999 -> 3631 (UDP): 115 dsts ( 1375 flows, 35641 pckts, 3724240 octs) (15.04.09 14:56:24, 63 min 26 sec)
* 137.226.81.999 -> 3/ 1 (ICMP): 1657 dsts ( 4320 flows, 11103 pckts, 1365695 octs) (15.04.09 14:56:27, 63 min 25 sec)
* 134.130.240.99 -> 80 (TCP): 107 dsts ( 1289 flows, 15224 pckts, 2665758 octs) (15.04.09 14:56:29, 63 min 21 sec)
...
Note: For privacy reasons, the IP addresses in this listing were altered.
- Since the nfcapd files can be very large, nfportscan will consume a lot of system memory (and is therefore likely to crash if you scan too many files at once). You can prevent this by reducing the amount of data to be kept, by applying filters (-F). The filter syntax is straightforward and similar to tcpdump's filter syntax. For example, you can use (proto 6) and (dst port 22) to get SSH connections. Please refer to the nfdump project website (see "filter syntax") for a detailed description.
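As an illustration (the file names are taken from the example above; the filter is the SSH filter just mentioned), such a filter could be combined with the usual invocation like this:

```
$ ./nfportscan -F '(proto 6) and (dst port 22)' data/nfcapd.2009041515*
```

Quoting the filter keeps the shell from interpreting the parentheses; nfportscan then discards all non-matching flows before counting, which also keeps memory usage low.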
Test system: Intel Xeon, 8 × 3 GHz, 12 GiB RAM, running Fedora Linux 8 (32-bit)
Some example cases:
- Analyzing 24 files (which corresponds to 2 hours of data), about 100 megabytes each, without additional filters takes 2 minutes and 1 GiB of heap memory.
- Analyzing 288 files (24 GiB total, corresponding to one day) with the filter "port 22" takes about 11 minutes and 14 MiB of heap memory.
- Analyzing 288 files (24 GiB total, corresponding to one day) without any filters is not possible (malloc() dies), most likely because on a 32-bit system each process can address only about 3-4 GiB of RAM.
(Of course, these values depend strongly on the size of your nfcapd files and the number of filter hits!)