On a busy Linux Netfilter-based firewall, you usually need to raise the maximum number of tracked connections, or new connections will be denied and you’ll see kernel log messages like this:
nf_conntrack: table full, dropping packet.
More connections will use more RAM, but how much? We don’t want to overcommit: the connection tracker uses unswappable kernel memory, and things will blow up if we run out. If we set aside 512MB for connection tracking, how many concurrent connections can we track?
There is some Netfilter documentation on wallfire.org, but it’s quite old. How can we be sure it’s still correct without completely understanding the Netfilter code? Does it account for real-life constraints such as page size, or is it just derived from reading the code? A running Linux kernel gives us all the information we need through its slabinfo proc file.
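As a back-of-the-envelope sketch of the arithmetic, the budget divides out as below. The per-object size here is an assumption for illustration; on a real system you’d take the objsize column from the nf_conntrack (or ip_conntrack) line of /proc/slabinfo on your own kernel.

```shell
# How many connections fit in 512MB of conntrack memory?
BUDGET=$((512 * 1024 * 1024))  # bytes set aside for connection tracking
OBJSIZE=304                    # assumed bytes per conntrack object (check slabinfo)
BUCKET=8                       # one hash-bucket pointer per connection (64-bit)
echo $((BUDGET / (OBJSIZE + BUCKET)))
```

Swap in the real objsize figure from slabinfo and the same division gives the safe value for the conntrack maximum on your hardware.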
Continue reading Netfilter Conntrack Memory Usage
Tim Dobson very kindly recorded and uploaded my talk on the Ukepedia at Barcamp Leeds last Saturday.
For those of you with short attention spans, I finally get started with the talk at about 2mins 30, and start singing the first article, Otitis Media, at about 7mins.
Of all the WordPress installations I manage, two of them bring in a rather large number of hits.
To speed up WordPress I usually just enable the MySQL query cache and install the eAccelerator PHP opcode cache. On one particular box, an Intel 1.3GHz PIII, this increased performance from around 3 requests per second to around 10.
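For reference, enabling the query cache is just a couple of lines in my.cnf; the 32M size here is an arbitrary example, not a tuned value:

```ini
[mysqld]
query_cache_type = 1    # cache SELECT results where possible
query_cache_size = 32M  # example size; tune to your workload
```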
Recently I came across the WP-Cache plugin for WordPress. It takes the finished output from any given WordPress request and caches it to disk, serving directly from the static cache for the next hour (configurable). Any new posts or comments in the meantime immediately mark the cached version stale, so you don’t have to wait around for an hour.
On the same hardware and blog, this increases performance from 10 requests per second to over 250: a 25-fold speedup.
Continue reading High performance WordPress
When controlling access to files on a web server, developers often use the web application itself as a file server. The request comes in, the script checks for some session authentication variable or something, then streams the file from disk (hopefully from outside the webroot) to the browser.
The problem with this from a performance standpoint is that a thread/process of the web application has to be running for the entire duration of the download. With a busy web server serving many concurrent downloads, this is an immense overhead. The web server itself should be orders of magnitude faster at serving files directly than via a web application, but you can’t just stick the files in a different directory and hope nobody finds the secret URLs. The new web server on the block, Lighttpd, has some clever solutions to this problem.
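One such approach is lighttpd’s mod_secdownload, where the application only generates a short-lived, hash-protected URL and lighttpd serves the file itself. A minimal config sketch follows; the secret, paths, and prefix are placeholders, not values from a real deployment:

```
server.modules += ( "mod_secdownload" )
secdownload.secret        = "some-long-secret"  # placeholder shared secret
secdownload.document-root = "/srv/private/"     # files kept outside the webroot
secdownload.uri-prefix    = "/downloads/"
secdownload.timeout       = 3600                # links expire after an hour
```

The application builds the URL from the shared secret, the file path, and the current time, so access control stays in the app while the heavy lifting of serving bytes stays in the web server.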
Continue reading Lighttpd and Ruby on Rails: Secure and Fast Downloading
My earlier post about Turck-mmcache is now out of date. Turck-mmcache has not been actively developed in quite a while. eAccelerator is a fork of Turck-mmcache and is actively developed by a new team.
eAccelerator fixes all the PHP crashing errors I had, and adds support for newer PHP versions too (including PHP 5.1 in their latest dev snapshot, which I’ve had working perfectly, by the way).
Continue reading eAccelerator php speederupper