Tag: tcp

Heroku tcp session information leakage

The Linux kernel exposes lots of interesting information via the /proc filesystem. For example, /proc/net/tcp and /proc/net/udp expose information about all TCP and UDP sessions on the server.

In the usual Linux kernel style, these files are freely readable by all users on the system. This becomes a bit of a problem on a multi-tenant system like Heroku. Heroku deploys many customers' apps on each of its Amazon EC2 servers, so if you create an app skeleton, deploy it to Heroku and shell out, you can see every other app's TCP and UDP sessions:

$ heroku run console
Running `console` attached to terminal... up, run.1971
irb(main):001:0> system("netstat | grep ESTAB | head -n 10")
tcp 0 0 22d87d80-deb7-415:33113 ip-10-40-86-97.ec:27018 ESTABLISHED
tcp 0 0 22d87d80-deb7-415:57256 ip-10-60-122:postgresql ESTABLISHED
tcp 0 0 22d87d80-deb7-415:59268 domU-12-31-3:postgresql ESTABLISHED
tcp 0 0 22d87d80-deb7-415:38536 collector6.newrelic:www ESTABLISHED
tcp 0 0 22d87d80-deb7-415:54459 ip-10-189-243-92.:27017 ESTABLISHED
tcp 0 0 22d87d80-deb7-415:50495 ip-10-151-25-11.ec:6242 ESTABLISHED
tcp 0 0 22d87d80-deb7-415:53192 ip-10-114-25:postgresql ESTABLISHED
tcp 0 0 22d87d80-deb7-415:47405 72.21.215.154:https ESTABLISHED
tcp 0 0 22d87d80-deb7-415:39011 collector6.newrelic:www ESTABLISHED
tcp 0 0 22d87d80-deb7-415:52237 ip-10-218-31-147.:42002 ESTABLISHED
=> true
irb(main):002:0>

If the netstat command (or Ruby's system method) is not available, you can just read and parse the file directly in Ruby.
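
Something along these lines should do it; a minimal sketch, assuming the usual field layout (addresses are hex-encoded, with the IPv4 octets reversed on little-endian machines, and state 01 means ESTABLISHED; /proc/net/udp looks much the same):

# Minimal sketch: list established TCP sessions by parsing /proc/net/tcp.
TCP_ESTABLISHED = "01"

# Turn a hex "IP:port" pair (e.g. "0100007F:0016") into dotted-quad form.
def decode(hex_addr)
  ip_hex, port_hex = hex_addr.split(":")
  octets = ip_hex.scan(/../).reverse.map { |pair| pair.to_i(16) }
  "#{octets.join('.')}:#{port_hex.to_i(16)}"
end

File.readlines("/proc/net/tcp").drop(1).each do |line|
  _sl, local, remote, state = line.split
  next unless state == TCP_ESTABLISHED
  puts "#{decode(local)} -> #{decode(remote)}"
end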

It’s only an information leak, so I think it is quite low risk, but it is still a hint that Heroku’s process separation isn’t as strict as one might hope (I still remember David Chen’s pretty shocking discovery back in 2011).

There are a few ways to fix this kind of problem. Mandatory access control systems like AppArmor can prevent processes from reading these files. The Grsecurity patches also include plenty of protections against /proc-based information leaks, these files included.
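
For instance, an AppArmor profile for the contained app processes could carry deny rules along these lines (a hypothetical fragment for illustration, not Heroku's actual policy):

deny /proc/net/tcp r,
deny /proc/net/udp r,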

Heroku’s response

I reported my findings to Heroku’s security team back in September 2012. I had some problems getting timely responses at first, but after whining on Twitter (in March 2013) I got lots of attention from them (and an apology for the lack of response). Complaining on Twitter is clearly a powerful tool, to be used only wisely; and maybe sometimes when drunk.

Anyway, it seems they had been working hard on this and just failing to update me; it is quite a major change for them and a low-priority bug imo. They’ve fixed it by properly virtualising the networking as well. It’s mentioned on their change log, but the entry doesn’t go into much detail. The important point is that it’s fixed.

TCP, NAT and 2MSL mismatch

We have a client that connects over the NHS internal network to a server hosted at our site. We have lots of clients like this, but this one is slightly different because they NAT all their machines behind a single IP before the traffic reaches us.

Recently they complained about connection problems and after lots of investigation we managed to get a packet capture of the problem (IPs changed of course):

 1  0.00 192.168.0.1 -> 10.0.0.254 TCP 2268 > 80 [SYN]
 2  0.00 10.0.0.254 -> 192.168.0.1 TCP 80 > 2268 [SYN, ACK]
 3  0.01 192.168.0.1 -> 10.0.0.254 TCP 2268 > 80 [ACK]
 4  0.08 192.168.0.1 -> 10.0.0.254 HTTP POST
 5  0.24 10.0.0.254 -> 192.168.0.1 TCP 80 > 2268 [ACK]
 6  0.23 192.168.0.1 -> 10.0.0.254 HTTP Continuation
 7  0.24 10.0.0.254 -> 192.168.0.1 HTTP HTTP/1.1 200 OK 1365
 8  0.24 10.0.0.254 -> 192.168.0.1 HTTP Continuation
 9  0.24 10.0.0.254 -> 192.168.0.1 TCP 80 > 2268 [FIN, ACK]
10  0.29 192.168.0.1 -> 10.0.0.254 TCP 2268 > 80 [ACK]
11  0.31 192.168.0.1 -> 10.0.0.254 TCP 2268 > 80 [FIN, ACK]
12  0.31 10.0.0.254 -> 192.168.0.1 TCP 80 > 2268 [ACK]
13  0.34 192.168.0.1 -> 10.0.0.254 TCP 2268 > 80 [ACK]
14 68.26 192.168.0.1 -> 10.0.0.254 TCP 2268 > 80 [SYN]
15 71.18 192.168.0.1 -> 10.0.0.254 TCP 2268 > 80 [SYN]
16 77.13 192.168.0.1 -> 10.0.0.254 TCP 2268 > 80 [SYN]
17 98.25 192.168.0.1 -> 10.0.0.254 TCP 2268 > 80 [RST, CWR]


(more…)

netfilter ip_conntrack_ftp and tls

Here I am attempting to connect to a server using lftp, wondering why the firewall is blocking the incoming data connections even though ip_conntrack_ftp has been working for years.

lftp supports TLS, and so does the server I’m connecting to. This means the control connection is encrypted, so the netfilter FTP connection tracker can’t peek inside the packets to find out which ports to open up for the data connections. DUH. FTP sucks.

Anyway, the only way I found to disable TLS support in lftp is to add the following line to ~/.lftp/rc:

set ftp:ssl-allow false

2.6.7-8 default window scaling settings

My new Fedora installation was playing up with certain web sites, resulting in *very* slow downloads (I could see the words drawing on my screen one by one). An Ethereal dump showed a nice big window size, but packets of at most 120 bytes and an ACK for each one!

Well, it turns out that since about kernel 2.6.7 the default window scale setting (tcp_default_win_scale) has been 7. The problem is that, as with ECN, there are lots of broken routers out there which break window scaling (they strip the TCP options, which is totally against the RFC, and common sense). So the other end doesn’t know you’re scaling, and it thinks you set (or you think it set) a tiny ickle window size.

Anyway, I fixed it for now with a ‘net.ipv4.tcp_default_win_scale = 0’ in my /etc/sysctl.conf, but there is a new kernel patch floating around which seems to be a bit cleverer and should be in the next kernel release.
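
For reference, the same change can be applied immediately without a reboot (assuming your kernel version actually has the tcp_default_win_scale knob):

sysctl -w net.ipv4.tcp_default_win_scale=0   # apply the setting right away
sysctl -p                                    # or re-read /etc/sysctl.conf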