There are plenty of reasons to tunnel one network connection through another without encryption: you might, for instance, want to transparently connect two separate networks (e.g. data centers) through a third one, or want to use a publicly reachable IP address behind your provider's NAT. Whatever the reason, what you end up doing is encapsulating your data in IP packets that pass through the transit network to the other tunnel endpoint. While there is no question that this works (it has been a solved problem for a while), this article looks at the performance of various tunneling methods on a very, very low-end consumer-grade device: a TP-Link WR841N v9 wireless router. This device costs less than €15 and is still a very capable router because it can run the versatile open-source OpenWRT operating system.
Test set-up
For the test set-up we use a TP-Link WR841N v9 router running the OpenWRT 15.05 firmware. The router is connected via a NETGEAR GS105 switch to a XenServer 6.5 SP1 virtualization host (4-core Intel Core i5-4460 @ 3.2 GHz, 32 GiB RAM) running an Ubuntu Server 15.10 virtual machine with two cores and 4 GiB RAM.
Test procedure
After setting up each tunnel type, an iperf server is started on the Ubuntu virtual machine and an iperf client is repeatedly run on the OpenWRT router with different Maximum Segment Size (MSS) parameters. The purpose of varying the MSS is to gauge how well the encapsulation copes with high packet rates carrying little payload (e.g., VoIP, game traffic, lots of concurrent users). For each test, iperf is configured to perform a 120-second unidirectional TCP transmission and report the achieved throughput.
To run the test with the maximum possible MSS, iperf is called as follows:
$ iperf -c <UBUNTU-TUNNEL-IP> -t 120
For the reduced MSS value measurements this call is used:
$ for i in 1300 1100 900 700 500 300 100; do iperf -c <UBUNTU-TUNNEL-IP> -t 120 -M $i -m; done
Results
MSS size | No VPN | IPIP | GRE | PPTP | OpenVPN | L2TP_eth + UDP | IPIP+FoU |
---|---|---|---|---|---|---|---|
max / auto | 92.9 Mbit/s | 91.7 Mbit/s | 91.2 Mbit/s | 54.5 Mbit/s | 28 Mbit/s | 80.3 Mbit/s | 90.1 Mbit/s |
1288 bytes | 91.5 Mbit/s | 90.5 Mbit/s | 89.8 Mbit/s | 50.4 Mbit/s | 25.5 Mbit/s | 77.1 Mbit/s | 88.8 Mbit/s |
1088 bytes | 86.2 Mbit/s | 88.5 Mbit/s | 87.8 Mbit/s | 43.7 Mbit/s | 21.9 Mbit/s | 65.1 Mbit/s | 86.7 Mbit/s |
888 bytes | 84.3 Mbit/s | 86.2 Mbit/s | 85.6 Mbit/s | 37 Mbit/s | 18.2 Mbit/s | 56.7 Mbit/s | 84.1 Mbit/s |
688 bytes | 70.5 Mbit/s | 83.3 Mbit/s | 82.4 Mbit/s | 30.1 Mbit/s | 14.7 Mbit/s | 44.8 Mbit/s | 81.3 Mbit/s |
488 bytes | 53.9 Mbit/s | 77.5 Mbit/s | 76.8 Mbit/s | 22.4 Mbit/s | 10.7 Mbit/s | 33.7 Mbit/s | 73.8 Mbit/s |
288 bytes | 38.3 Mbit/s | 63.7 Mbit/s | 61.5 Mbit/s | 13.9 Mbit/s | 6.6 Mbit/s | 20.7 Mbit/s | 57.1 Mbit/s |
88 bytes | 12.9 Mbit/s | 27.2 Mbit/s | 26.1 Mbit/s | 4.6 Mbit/s | 2.1 Mbit/s | 6.6 Mbit/s | 25.3 Mbit/s |
Appendix
These are – in short form – the commands to set up the various tunnels.
IPIP Tunnel Setup
OpenWRT:
$ ip tunnel add ipip0 mode ipip remote <VM-IP> local <OPENWRT-IP>
$ ip link set ipip0 up
$ ip addr add 10.2.2.1/24 dev ipip0
Ubuntu:
$ ip tunnel add ipip0 mode ipip remote <OPENWRT-IP> local <VM-IP>
$ ip link set ipip0 up
$ ip addr add 10.2.2.2/24 dev ipip0
GRE Tunnel Setup
OpenWRT:
$ ip tunnel add ipip1 mode gre remote <VM-IP> local <OPENWRT-IP>
$ ip link set ipip1 up
$ ip addr add 10.3.3.1/24 dev ipip1
Ubuntu:
$ ip tunnel add ipip1 mode gre remote <OPENWRT-IP> local <VM-IP>
$ ip link set ipip1 up
$ ip addr add 10.3.3.2/24 dev ipip1
PPTP Tunnel Setup
OpenWRT:
/etc/config/network:
[...]
config interface 'vpn'
    option proto 'pptp'
    option server '<VM-IP>'
    option username 'vpn'
    option password 'vpn'
    option auto '0'
    option delegate '0'
    option defaultroute '0'
    option peerdns '0'
    option mtu '1462'
Ubuntu:
$ apt-get install pptpd

/etc/pptpd.conf:
option /etc/ppp/pptpd-options
localip 10.4.4.1
remoteip 10.4.4.10-15

/etc/ppp/pptpd-options:
name pptpd
nodefaultroute
lock
nobsdcomp
nologfd
mtu 1462

/etc/ppp/chap-secrets:
vpn * vpn *
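After editing these files, pptpd has to be restarted to pick up the configuration. On Ubuntu this should amount to something like the following (a sketch; the exact command depends on the init system in use):

$ service pptpd restart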
OpenVPN Tunnel Setup
OpenWRT (openvpn-nossl package):
$ openvpn --dev tun --remote <VM-IP> --proto udp --mssfix 1472 --comp-lzo no --ifconfig 10.5.5.1 10.5.5.2
Ubuntu:
$ openvpn --dev tun --proto udp --mssfix 1472 --comp-lzo no --fast-io --ifconfig 10.5.5.2 10.5.5.1
L2TPv3 Ethernet “Pseudowire” Setup with UDP Encapsulation
OpenWRT (kmod-l2tp-eth + ip-full packages):
$ ip l2tp add tunnel tunnel_id 1 peer_tunnel_id 1 \
    udp_sport 5000 udp_dport 5000 encap udp \
    local <OPENWRT-IP> remote <VM-IP>
$ ip l2tp add session tunnel_id 1 session_id 1 peer_session_id 1
$ ip link set l2tpeth0 up mtu 1428
$ ip addr add 10.6.6.1/24 dev l2tpeth0
Ubuntu:
$ modprobe l2tp_eth
$ ip l2tp add tunnel tunnel_id 1 peer_tunnel_id 1 \
    udp_sport 5000 udp_dport 5000 encap udp \
    local <VM-IP> remote <OPENWRT-IP>
$ ip l2tp add session tunnel_id 1 session_id 1 peer_session_id 1
$ ip link set l2tpeth0 up mtu 1428
$ ip addr add 10.6.6.2/24 dev l2tpeth0
Foo-over-UDP (FOU) Setup
This case is set up slightly differently from the rest, as off-the-shelf OpenWRT 15.05 lacks the fou kernel module. It has been added here, so this test runs on a slightly different firmware.
OpenWRT:
$ ip fou add port 42424 ipproto 4
$ ip link add name fou0 type ipip \
    remote <VM-IP> local <OPENWRT-IP> \
    encap fou encap-sport 42424 encap-dport 42424
$ ip link set fou0 up mtu 1472
$ ip addr add 10.7.7.1/24 dev fou0
Ubuntu:
$ ip fou add port 42424 ipproto 4
$ ip link add name fou0 type ipip \
    remote <OPENWRT-IP> local <VM-IP> \
    encap fou encap-sport 42425 encap-dport 42424
$ ip link set fou0 up mtu 1472
$ ip addr add 10.7.7.2/24 dev fou0
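To verify that a tunnel and its MTU behave as intended, a quick ping check helps. This is only a sketch and assumes the iputils ping on the Ubuntu side (the BusyBox ping on OpenWRT may not support -M); 1444 bytes of ICMP payload plus 20 bytes of IP and 8 bytes of ICMP header exactly fill the 1472-byte tunnel MTU:

$ ping -c 3 10.7.7.1
$ ping -c 3 -M do -s 1444 10.7.7.1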
How do these tunnel technologies cope with limited upstream bandwidth?
Any noticeable differences in congestion control?
Is PMTUD working properly?
Thanks, I had the same question for a slightly different device, and now it’s answered. I’m going to use ipip …
Hi Bastian,
1. Limited upstream
All of these tunneling methods create overhead due to the added headers. IPIP is the method with the least overhead, as the outer IP header is directly followed by the encapsulated IP packet. This overhead reduces the net bandwidth of your limited upstream. As for ‘coping’, I’m not sure I understand what you mean: when the upstream cannot transport as much data as is being sent, first the transmit queue(s) fill up (bufferbloat), and once they cannot take any further data, packets are dropped. Since more overhead means more data to push through that upstream, the methods with less overhead are better. 😉
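To put that overhead into perspective, here is a rough back-of-the-envelope calculation (a sketch run e.g. on the Ubuntu machine; it assumes plain IPv4 headers without options, a 20-byte TCP header, and ignores Ethernet framing and path MTU effects). With IPIP, every TCP segment carries 20 bytes outer IP + 20 bytes inner IP + 20 bytes TCP header, so headers make up roughly 40 % of each tunneled packet at an 88-byte MSS, but only about 4 % at 1448 bytes:

$ mss=88; hdr=$((20 + 20 + 20)); echo "scale=1; 100 * $hdr / ($mss + $hdr)" | bc
40.5
$ mss=1448; hdr=$((20 + 20 + 20)); echo "scale=1; 100 * $hdr / ($mss + $hdr)" | bc
3.9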
2. Congestion Control
Except for PPTP, all of these methods are stateless and just wrap the original packet in additional headers. This means they do not directly affect the congestion control mechanisms of, say, tunneled TCP connections. None of the described methods performs any congestion control of its own. PPTP, however, has a control channel. Unless precautions are taken (traffic shaping), its control messages may be lost on an (over)saturated network link, and the peers might shut down the tunnel due to missing responses to LCP echo requests.
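If PPTP has to survive a saturated link, the LCP keep-alive parameters of pppd can be relaxed so that a few lost echo replies do not tear the tunnel down. A sketch with purely illustrative values, added to /etc/ppp/pptpd-options on the Ubuntu side:

lcp-echo-interval 30
lcp-echo-failure 6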
3. Path MTU Discovery
I’m not an expert at this, so take this with a grain of salt: OpenVPN takes care of packet fragmentation, i.e., if the tunnel interface has a higher MTU than the network carrying the tunneled data (minus the added headers) allows, OpenVPN should fragment the encapsulated data and reassemble the proper packets on the receiving side. As for the other methods, I guess that an improperly set tunnel MTU will lead to dropped packets.
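For OpenVPN specifically, fragmentation and MSS clamping can be controlled explicitly via --fragment and --mssfix. A sketch with an illustrative value of 1400 (the --fragment value must match on both endpoints), mirroring the OpenWRT-side invocation from the appendix:

$ openvpn --dev tun --remote <VM-IP> --proto udp --comp-lzo no \
    --fragment 1400 --mssfix 1400 --ifconfig 10.5.5.1 10.5.5.2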
J
Just curious as to why UDP encapsulation is selected with L2TPv3, as this would impact performance and overhead, not to mention forcing a lower MTU than necessary. Have I missed something, or would it not be a fairer comparison to use Pseudowire with IP encapsulation in performance tests across tunneling protocols?
You’re absolutely right: I should have tested plain L2TPv3 without UDP encapsulation as well for a proper comparison. I updated the article a bit to better reflect that this is not vanilla L2TPv3 but with UDP encap.
My personal motivation to do the test was to find a fast candidate that could work through NATs and to gauge how much slower that candidate would be in comparison to the IPIP / GRE champions.
Comparing IPIP to IPIP+FoU (which is somewhat comparable to L2TPv3 with/without UDP encapsulation), you can see that the difference is very small.
Re the NAT requirement, I have had initial success with managed L2TPv3 tunnels in a traditional “VPN concentrator” client/server topology with clients behind NAT. A “proof of concept” test of the same topology with the server NATed onto a VM also seems to work. Of course there are many variants of NAT environments and YMMV, but the NAT restrictions of the Pseudowire itself should be the same as for GRE or IPIP encapsulation.
The tunnel management component tested on top of the L2TPv3 kernel module was Tunneldigger https://github.com/wlanslovenija/tunneldigger, a Python-based tunnel broker utilizing out-of-band UDP/1701 management.
Hello Justus,
Are there any news on this topic yet?
Is one of these methods already being considered for deployment on the server?
I assume you refer to the Freifunk VPN servers in Berlin – no, no news yet, but the development of a kernel-space-based ‘VPN’ (it’s not private) is still a work in progress.
Interesting results. I was surprised to see that OpenVPN fares so badly in comparison to the others. I guess you never checked the send and receive buffers. If so, the default sizes are set to 64 kB, which is easily saturated by a connection of just a few Mbps (5-10 Mbps). You would have to increase this to a value that matches your maximum expected throughput (or link capacity). Only then would it be a fair game for OpenVPN.
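For reference, OpenVPN’s socket buffers can be raised with its --sndbuf and --rcvbuf options. A sketch with an illustrative 512 KiB value, applied to the OpenWRT-side invocation from the appendix (the Ubuntu side would get the same options):

$ openvpn --dev tun --remote <VM-IP> --proto udp --mssfix 1472 --comp-lzo no \
    --sndbuf 524288 --rcvbuf 524288 --ifconfig 10.5.5.1 10.5.5.2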
Let me add some results for an IPsec tunnel (ESP with NULL crypto and SHA1 integrity) running through a TP-Link WR1043 v2:
– MSS size 1288 bytes: 60.1 Mbits/sec
– MSS size 1088 bytes: 53.7 Mbits/sec
– MSS size 888 bytes: 44.8 Mbits/sec
– MSS size 688 bytes: 36.8 Mbits/sec
– MSS size 488 bytes: 27.5 Mbits/sec
– MSS size 288 bytes: 17.0 Mbits/sec
– MSS size 88 bytes: 5.45 Mbits/sec
So, keeping in mind the somewhat faster CPU of the WR1043, IPsec seems to be about as fast as PPTP.
Hi, I have built a version of OpenWRT with kmod-fou enabled but cannot get past this step…
ip link add name fou0 type ipip \
remote 95.3.2.1 local 192.168.1.1 \
encap fou encap-sport 42424 encap-dport 42424
RTNETLINK answers: Invalid argument
Any ideas what I’m missing?
The modules are loaded:
root@OpenWrt:/# lsmod | grep fou
fou 7552 0
ip6_udp_tunnel 1591 1 fou
udp_tunnel 1923 1 fou
root@OpenWrt:/# lsmod | grep ipip
ip_tunnel 12175 1 ipip
ipip 3728 0
tunnel4 1790 1 ipip
@Jon,
A late one… Have you checked whether your iproute2 version supports FOU? You need version 4.0 or higher (or patch it yourself).
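A quick way to check is to ask the ip binary itself; a build without FOU support will not know the fou object at all (the exact output varies by version):

$ ip -V
$ ip fou help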
I have the same situation, @Jon. Yes, it has FOU support in iproute2. Any ideas? Thanks in advance.
root@OpenWrt:~# lsmod | grep fou
fou 7360 0
ip6_udp_tunnel 1463 1 fou
udp_tunnel 1859 1 fou
root@OpenWrt:~# ip -V
ip utility, iproute2-ss4.4.0-1-openwrt
root@OpenWrt:~# ip
Usage: ip [ OPTIONS ] OBJECT { COMMAND | help }
ip [ -force ] -batch filename
where OBJECT := { link | address | addrlabel | route | rule | neighbor | ntable |
tunnel | tuntap | maddress | mroute | mrule | monitor | xfrm |
netns | l2tp | fou | tcp_metrics | token | netconf }
OPTIONS := { -V[ersion] | -s[tatistics] | -d[etails] | -r[esolve] |
-h[uman-readable] | -iec |
-f[amily] { inet | inet6 | ipx | dnet | mpls | bridge | link } |
-4 | -6 | -I | -D | -B | -0 |
-l[oops] { maximum-addr-flush-attempts } | -br[ief] |
-o[neline] | -t[imestamp] | -ts[hort] | -b[atch] [filename] |
-rc[vbuf] [size] | -n[etns] name | -a[ll] | -c[olor]}
Thanks a lot, this helped in figuring out which Ethernet tunnel protocol would best fit into a WireGuard-encrypted tunnel, so that batman-adv can be run through the tunnel (and the fastd behaviour simulated).
The goal is to have an encrypted layer-2 tunnel.
This puzzle piece made my day 😉
More info for those who are interested:
https://forum.freifunk.net/t/wireguard-0-0-20161230-mit-linux-3-18-kernel-und-damit-gluon-v2016-2-2/14122/
@Jon & @Maedot – I have the same problem with LEDE. iproute2 4.4.0-9, GRE works, but with encap fou it throws “RTNETLINK answers: Invalid argument”. It’s kind of sad, because now I need to use L2TP for a bit longer. Under Debian it works as it should.
If anyone has some ideas, please reply 🙂
In LEDE, FOU is disabled in the kernel:
# make kernel_menuconfig
Find the IP tunneling FOU option and enable it.
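For completeness, a hedged sketch of where that option lives in the kernel configuration (the menu location may differ slightly between kernel versions; the kernel symbol is CONFIG_NET_FOU), followed by rebuilding the image:

  Networking support -> Networking options -> IP: Foo (IP protocols) over UDP
$ make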