Netfilter Conntrack Memory Usage

On a busy Linux Netfilter-based firewall, you usually need to raise the maximum number of allowed tracked connections (or new connections will be denied and you’ll see kernel log messages like this: nf_conntrack: table full, dropping packet).

More connections use more RAM, but how much? We don’t want to overcommit, as the connection tracker uses unswappable memory, and things will blow up if we run out. If we set aside 512MB for connection tracking, how many concurrent connections can we track?

There is some Netfilter documentation on wallfire.org, but it’s quite old. How can we be sure it’s still correct without completely understanding the Netfilter code? Does it account for real-life constraints such as page size, or is it just derived from reading the code? A running Linux kernel gives us all the information we need through its slabinfo proc file.

We can peek at how the kernel is using RAM via the proc file /proc/slabinfo and clear this up. The nf_conntrack entry there tells us, on one particular firewall, that there are 26,702 active entries (or objects), that each object is 304 bytes in size, and that 13 of them fit in each slab (and that each slab is one kernel page). So we know that conntrack entries take up 304 bytes each. But if we’re going to be accurate, we have to account for the overhead imposed by the kernel page size.
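To make the numbers above concrete, here is a small sketch of pulling those fields out of a slabinfo line. The sample line is illustrative (its values match the firewall described in this article); on a real system you would read it from /proc/slabinfo, which typically requires root.

```python
# Illustrative nf_conntrack line in /proc/slabinfo format; on a real
# system, read this from /proc/slabinfo instead.
sample = ("nf_conntrack 26702 28119 304 13 1"
          " : tunables 0 0 0 : slabdata 2163 2163 0")

fields = sample.split()
name = fields[0]
active_objs = int(fields[1])   # objects currently in use
objsize = int(fields[3])       # bytes per object
objperslab = int(fields[4])    # objects per slab
pagesperslab = int(fields[5])  # kernel pages per slab

print(name, active_objs, objsize, objperslab, pagesperslab)
# nf_conntrack 26702 304 13 1
```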

The Linux kernel uses a slab memory allocator, so rather than allocating 304 bytes every time a conntrack entry is needed, entries are allocated in “slabs” of one or more kernel pages, which reduces memory fragmentation and improves performance. When objects are freed, the memory isn’t returned immediately – instead it is reused the next time another object of the same type is needed.

In the kernel we’re using, the page size is 4096 bytes. As slabinfo told us, 13 nf_conntrack objects fit in each slab and each slab takes up 1 page. 13 objects of 304 bytes is 3952 bytes in total, which leaves 144 bytes of waste per slab – so every 13 objects we waste 144 bytes. Spreading that waste across the objects, an nf_conntrack entry effectively consumes about 315 bytes (4096 ÷ 13) on this box, giving us almost 1.7 million entries for our 512MB.
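The arithmetic above can be checked with a few lines of Python, using the numbers reported by slabinfo on this box (4096-byte pages, 13 objects of 304 bytes per slab):

```python
# Effective per-object cost of an nf_conntrack entry, including
# the per-slab waste, using the values from this article.
PAGE_SIZE = 4096       # bytes per kernel page (getconf PAGESIZE)
OBJ_SIZE = 304         # bytes per nf_conntrack object
OBJS_PER_SLAB = 13     # objects per slab (1 page per slab here)

used = OBJS_PER_SLAB * OBJ_SIZE        # 3952 bytes actually used per slab
waste = PAGE_SIZE - used               # 144 bytes wasted per slab
effective = PAGE_SIZE / OBJS_PER_SLAB  # ~315 bytes per object, waste included

budget = 512 * 1024 * 1024             # 512MB set aside for tracking
slabs = budget // PAGE_SIZE            # whole slabs that fit in the budget
entries = slabs * OBJS_PER_SLAB        # trackable connections

print(waste, round(effective), entries)
# 144 315 1703936
```

So the 512MB budget buys us 1,703,936 tracked connections – the “almost 1.7 million” quoted above.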

You can get your kernel’s page size with the command: getconf PAGESIZE. The slabtop program, installed on most modern GNU/Linuxes, shows the info from /proc/slabinfo in a pretty table and lets you sort the values.

Comments

Do you know the status of the SLUB allocator?

http://lwn.net/Articles/229984/

Aaron Alpar says:

Thanks! Really clear basic analysis with similarly described extras (I came for conntrack, but found out how to look at the slabs in proc)

Martin Rusko says:

Thanks for this article. I’m trying to figure out how hashsize fits into the whole picture … so what is the memory consumption with regard to hashsize?

Mohammad Yosefpor says:

Great article. Thanks.
Simple formula from above:

memory usage = (nf_conntrack_entries / objperslab) * pagesperslab * page_size
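The commenter’s formula can be sketched as a small function. One tweak worth making (an assumption on my part, not in the original formula): the slab count is rounded up, since a partially filled slab still occupies whole pages.

```python
import math

def conntrack_memory(entries, objperslab=13, pagesperslab=1, page_size=4096):
    """Approximate bytes used by `entries` tracked connections.

    Defaults match the slabinfo values from the article; round the
    slab count up because partial slabs still occupy whole pages.
    """
    slabs = math.ceil(entries / objperslab)
    return slabs * pagesperslab * page_size

# The 26,702 active entries from the article fill 2054 slabs,
# i.e. roughly 8MB:
print(conntrack_memory(26702))
# 8413184
```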
