Rate limiting by request in Apache isn’t easy, but I finally figured out a satisfactory way of doing it using the mod-security Apache module. We’re using it at Brightbox to prevent buggy scripts rinsing our metadata service. In particular, we needed the ability to allow a high burst of initial requests, as that’s our normal usage pattern. So here’s how to do it.
Install mod-security (on Debian/Ubuntu, just install the libapache2-modsecurity package) and configure it in your virtual host definition like this:
# Initialise a per-client-IP collection and bump a counter on each request;
# the counter decays by 1 per second, so an initial burst gets through before
# the deny rule fires. (The init/decay/increment SecActions are reconstructed
# here, as the excerpt shows only the deny rule; unique rule ids are required
# by ModSecurity 2.7+.)
SecAction "phase:1,initcol:ip=%{REMOTE_ADDR},pass,nolog,id:10"
SecAction "phase:2,deprecatevar:ip.somepathcounter=1/1,pass,nolog,id:11"
SecRule IP:SOMEPATHCOUNTER "@gt 60" "phase:2,pause:300,deny,status:509,setenv:RATELIMITED,skip:1,nolog,id:12"
SecAction "phase:2,setvar:ip.somepathcounter=+1,pass,nolog,id:13"
Header always set Retry-After "10" env=RATELIMITED
ErrorDocument 509 "Rate Limit Exceeded"
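To see why a per-IP counter with a "@gt 60" threshold permits a high initial burst, here’s a minimal Python sketch of the behaviour it implies. The LeakyBucket class, the 1/sec drain rate, and all the names are my own illustration of the idea, not part of ModSecurity:

```python
import time

class LeakyBucket:
    """Toy model of a per-IP request counter: each allowed request adds 1,
    the counter drains at a fixed rate, and requests are refused while the
    counter is above the limit (the SecRule's "@gt 60" check)."""

    def __init__(self, limit=60, drain_per_sec=1.0, clock=time.monotonic):
        self.limit = limit
        self.drain = drain_per_sec
        self.clock = clock
        self.counter = 0.0
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Drain the counter for the time elapsed since the last request.
        self.counter = max(0.0, self.counter - (now - self.last) * self.drain)
        self.last = now
        if self.counter > self.limit:
            return False  # the real rule would deny with status 509
        self.counter += 1
        return True
```

A fresh client gets a burst of requests through immediately; under sustained load it then settles to roughly one request per second as the counter drains, which matches the burst-then-trickle pattern described above.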
Continue reading Rate limiting with Apache and mod-security
On a busy Linux Netfilter-based firewall, you usually need to increase the maximum number of allowed tracked connections, or new connections will be dropped and you’ll see log messages from the kernel like this:
nf_conntrack: table full, dropping packet.
More connections will use more RAM, but how much? We don’t want to overcommit, as the connection tracker uses unswappable memory and things will blow up. If we set aside 512MB for connection tracking, how many concurrent connections can we track?
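As a back-of-envelope estimate, here’s the arithmetic in Python. The 320-byte object size is an assumption for illustration (the real figure varies by kernel version and architecture and should be read from slabinfo), and the bucket overhead assumes the default nf_conntrack_buckets = nf_conntrack_max / 4 ratio with 8-byte hash bucket pointers:

```python
# Hedged back-of-envelope estimate; read /proc/slabinfo on your own
# kernel for the real nf_conntrack object size.
budget_bytes = 512 * 1024 * 1024   # 512MB set aside for conntrack
objsize = 320                      # assumed bytes per nf_conntrack entry
bucket_bytes = 8                   # one hlist head pointer per hash bucket
buckets_per_entry = 0.25           # default: nf_conntrack_buckets = max / 4
per_entry = objsize + bucket_bytes * buckets_per_entry  # 322 bytes
max_conns = int(budget_bytes / per_entry)
print(max_conns)  # roughly 1.6 million tracked connections
```

Under those assumptions, 512MB buys on the order of 1.6 million concurrent connections, but the point of the post is precisely not to trust a guessed object size.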
There is some Netfilter documentation on wallfire.org, but it’s quite old. How can we be sure it’s still correct without completely understanding the Netfilter code? Does it account for real-life constraints such as page size, or is it just derived from reading the code? A running Linux kernel gives us all the info we need through its slabinfo proc file.
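As a sketch of what that lookup involves, here’s a small Python helper that pulls a cache’s object size out of slabinfo-format text. The field positions follow the slabinfo v2 layout (name, active_objs, num_objs, objsize, objperslab, pagesperslab); the sample line and its numbers are made up for illustration:

```python
def slab_objsize(slabinfo_text, cache_name):
    """Return (objsize, objperslab) for a named slab cache from text in
    /proc/slabinfo v2 format."""
    for line in slabinfo_text.splitlines():
        fields = line.split()
        if fields and fields[0] == cache_name:
            # fields: name, active_objs, num_objs, objsize, objperslab, ...
            return int(fields[3]), int(fields[4])
    raise KeyError(cache_name)

# Illustrative sample only; feed it open("/proc/slabinfo").read() for
# real numbers on a running kernel.
sample = "nf_conntrack   10230  12096    320   12    1 : tunables ..."
print(slab_objsize(sample, "nf_conntrack"))  # (320, 12)
```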
Continue reading Netfilter Conntrack Memory Usage