No subject


Mon Sep 20 15:00:41 CEST 2010


I've noticed some inconsistent performance with some of my tunnels and thought I would take some of the spare free time I have over the holidays to try to figure out what the cause of that may be. My environment in this case is my home LAN.

Please forgive my use of the terms "server" and "client" in this email; I only use these terms to simplify the explanation.

I statically assigned my server an IP of 10.10.10.1, and my client is set to 10.10.10.2. The rest of my LAN uses 192.168.2.0/24, so in this case I am using Tinc to create a tunnel to access the 192.168.2.0/24 network from my client. This is all on common switch fabric, with no in-between firewalls of any kind involved and no firewalls configured on either server or client.

On the server, Tinc is running on a stripped-down CentOS 5.5 as a virtual machine, and all numbers given here are from this configuration.

I have also tested this on a normal CentOS 5.5 install, as well as Ubuntu 9.04, 9.10, 10.04, and 10.10, all with and without VMware Tools installed. Although there are performance differences between the different builds, the behavior I describe has been the same on all of them. The only thing I haven't tested is a native OS install.

Tinc is configured in switch mode.
The server virtual adapter is bridged to the physical adapter using brctl. The client receives an address on the 192.168.2.0/24 network via DHCP from my internet router.

ifconfig of Tinc "server":

br0       Link encap:Ethernet  HWaddr 00:0C:29:58:B5:6B
          inet addr:192.168.2.4  Bcast:192.168.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fe58:b56b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21384 errors:0 dropped:0 overruns:0 frame:0
          TX packets:23987 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:8452737 (8.0 MiB)  TX bytes:23819155 (22.7 MiB)

br0:0     Link encap:Ethernet  HWaddr 00:0C:29:58:B5:6B
          inet addr:10.10.10.1  Bcast:10.10.10.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth0      Link encap:Ethernet  HWaddr 00:0C:29:58:B5:6B
          inet6 addr: fe80::20c:29ff:fe58:b56b/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:196191 errors:0 dropped:0 overruns:0 frame:0
          TX packets:38068 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:145802768 (139.0 MiB)  TX bytes:28914683 (27.5 MiB)
          Interrupt:177 Base address:0x1424

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:222 errors:0 dropped:0 overruns:0 frame:0
          TX packets:222 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:126456 (123.4 KiB)  TX bytes:126456 (123.4 KiB)

vpn       Link encap:Ethernet  HWaddr FE:42:68:39:D9:1F
          inet6 addr: fe80::fc42:68ff:fe39:d91f/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:13890 errors:0 dropped:0 overruns:0 frame:0
          TX packets:22405 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:5055429 (4.8 MiB)  TX bytes:21399229 (20.4 MiB)

[root@localhost ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000c2958b56b       no              vpn
                                                        eth0

ipconfig of Windows XP "client":

Windows IP Configuration

Ethernet adapter Local Area Connection:

   Connection-specific DNS Suffix  . : local
   IP Address. . . . . . . . . . . . : 10.10.10.2
   Subnet Mask . . . . . . . . . . . : 255.0.0.0
   Default Gateway . . . . . . . . . :

Ethernet adapter Wireless Network Connection 2:

   Media State . . . . . . . . . . . : Media disconnected

Ethernet adapter Tinc:

   Connection-specific DNS Suffix  . : local
   IP Address. . . . . . . . . . . . : 192.168.2.246
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 192.168.2.1

What I've discovered using level 5 debugging is that often, when a connection is made, MTU probes from the client are not responded to.

The tell-tale sign I've seen every time is particularly high latency.

I've been able to reproduce the condition nearly (though not quite) every time if I manually start the client (the Windows XP client) in a command prompt, press Ctrl+C to stop it, and then restart it after approximately 5 seconds.

The client will print the message "No response to MTU probes from Server", and from that point on basically all traffic carries the message "Packet for Server (10.10.10.1 port 8002) larger than minimum MTU, forwarding via TCP".

From what I can tell, no further attempts at MTU probes are made, and the connection will remain a TCP connection unless it is broken and restarted.
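As an aside, for anyone wanting to reproduce this setup: the switch-mode configuration above boils down to roughly the following on the server. This is a minimal sketch; the network name "myvpn" is a placeholder and the tinc-up contents are guessed from the bridge output, only Mode = switch and the brctl bridging are stated above.

/etc/tinc/myvpn/tinc.conf:

    Name = Server
    Mode = switch
    Interface = vpn

/etc/tinc/myvpn/tinc-up:

    #!/bin/sh
    # Attach tinc's tap device to the existing bridge instead of giving it
    # an address of its own; tinc exports the device name as $INTERFACE.
    ifconfig $INTERFACE 0.0.0.0 promisc up
    brctl addif br0 $INTERFACE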

Usually, if I stop the client and wait about 30 seconds before reconnecting,
there is a much greater chance that the MTU probes work fine, and within
about 30 seconds the MTU is fixed at 1416.

Every time the MTU probing fails, I see latency between 700 and 1000 ms
with 32-byte pings over the LAN.
Every time the MTU probing succeeds, I see latency between 1 and 3 ms
with 32-byte pings over the LAN.
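(For reference, these are plain Windows pings from the XP client, along the lines of:

ping 192.168.2.4

XP's ping sends 32-byte payloads by default, hence the 32-byte figure; 192.168.2.4 is just an example target, the server's br0 address.)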

I used iperf to measure throughput in various configurations for
comparison. The iperf server is a separate device on the LAN.
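For each run below, the client side is just the standard iperf TCP client pointed at the server's LAN address, i.e. something like the following (reconstructed; the client commands weren't captured with the output):

iperf -c 192.168.2.31 -t 10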

donald@ubuntu:/opt/vmware/ovftool$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
** 3 tests directly on the LAN, no VPN installed, as a baseline **

[  4] local 192.168.2.31 port 5001 connected with 192.168.2.243 port 2826
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec    112 MBytes  93.4 Mbits/sec
[  5] local 192.168.2.31 port 5001 connected with 192.168.2.243 port 2827
[  5]  0.0-10.0 sec    110 MBytes  92.4 Mbits/sec
[  4] local 192.168.2.31 port 5001 connected with 192.168.2.243 port 2828
[  4]  0.0-10.0 sec    111 MBytes  93.1 Mbits/sec

** 4 tests over the VPN, MTU probing failed as observed using level 5
debugging **

[  5] local 192.168.2.31 port 5001 connected with 192.168.2.246 port 2859
[  5]  0.0-10.0 sec    232 KBytes    190 Kbits/sec
[  4] local 192.168.2.31 port 5001 connected with 192.168.2.246 port 2862
[  4]  0.0- 9.5 sec  1.33 MBytes  1.18 Mbits/sec
[  5] local 192.168.2.31 port 5001 connected with 192.168.2.246 port 2863
[  5]  0.0- 9.0 sec  1.48 MBytes  1.38 Mbits/sec
[  4] local 192.168.2.31 port 5001 connected with 192.168.2.246 port 2864
[  4]  0.0-10.5 sec  72.0 KBytes  56.0 Kbits/sec

** 5 tests over the VPN, MTU probing successful as observed using level 5
debugging, after the MTU was set to 1416 in the debug output **

[  5] local 192.168.2.31 port 5001 connected with 192.168.2.246 port 2936
[  5]  0.0-10.0 sec  19.3 MBytes  16.2 Mbits/sec
[  4] local 192.168.2.31 port 5001 connected with 192.168.2.246 port 2938
[  4]  0.0-10.0 sec  21.8 MBytes  18.3 Mbits/sec
[  5] local 192.168.2.31 port 5001 connected with 192.168.2.246 port 2939
[  5]  0.0-14.0 sec  8.41 MBytes  5.04 Mbits/sec
[  4] local 192.168.2.31 port 5001 connected with 192.168.2.246 port 2942
[  4]  0.0-10.0 sec  13.8 MBytes  11.6 Mbits/sec
[  5] local 192.168.2.31 port 5001 connected with 192.168.2.246 port 2943
[  5]  0.0-10.0 sec  14.2 MBytes  11.9 Mbits/sec

** 3 tests over the VPN without debugging, to rule out performance loss due
to debugging. MTU probing believed to be successful because ping times were
1-3 ms. 60-second delay before tests to allow the MTU to settle **

[  4] local 192.168.2.31 port 5001 connected with 192.168.2.246 port 2965
[  4]  0.0-10.0 sec  21.9 MBytes  18.4 Mbits/sec
[  5] local 192.168.2.31 port 5001 connected with 192.168.2.246 port 2966
[  5]  0.0-10.0 sec  23.3 MBytes  19.5 Mbits/sec
[  4] local 192.168.2.31 port 5001 connected with 192.168.2.246 port 2967
[  4]  0.0-10.0 sec  22.2 MBytes  18.6 Mbits/sec

So as observed, when Tinc falls back to TCP because the MTU probes failed,
there is a significant reduction in throughput and a large increase in
latency. When this happens in the "real world" it has become a little
annoying to have to break and reconnect several times until I get a good
connection.

I haven't yet been able to figure out why the MTU probes only work some of
the time.  I thought it might be my WAN but now I know it's not.  There is
nothing in the path of this test to block them, and I haven't found any sign
of packet loss on the LAN.
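One more thing worth trying is watching the probes on the wire with tcpdump on the server. Going by the debug message above, tinc here is on port 8002 rather than the default 655, so something like:

tcpdump -ni eth0 udp port 8002

should show whether the client's UDP probes reach the server at all, and whether the replies make it back.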

I will do some testing later with setting the PMTU directly and disabling
PMTUDiscovery, to see whether that results in consistent behavior. I would
really rather keep dynamic PMTU discovery, though, because it's a really
nice feature when you connect with a laptop and never know what network
medium you'll be on.
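For reference, in tinc 1.0 those are per-node options in the host config files, so on the client that test should amount to something like the following in hosts/Server (a sketch; 1416 is just the value the probes settle on above):

PMTUDiscovery = no
PMTU = 1416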


