Tweaks for high-bandwidth tinc

Guus Sliepen guus at tinc-vpn.org
Sat Nov 13 00:18:51 CET 2010


On Fri, Nov 12, 2010 at 12:43:41PM -0600, Brandon Black wrote:

> >> I've been using tinc to do some high bandwidth VPNs [...]
> >
> > How high is the bandwidth exactly?
> 
> The scenario is pretty extreme, 100-250Mbps bandwidth spread over
> ~45-60K long-lived TCP sessions.

That is indeed an extreme scenario.

> IFF_ONE_QUEUE:
> 
> I'm still up in the air as to the utility of this flag.  I made a
> patch to make it an experimental configuration parameter in tinc.conf,
> and enabling it actually seemed to make matters worse in our
> particular scenario.  Reading source code and comments around the net
> (re: other VPN solutions that use TUN), it seems like most people
> think it's beneficial though, and possibly even enable it by default.
> It's probably best left as an experimental config setting defaulting to
> off (current behavior) for now until someone really digs into this
> issue deeper though.  I can't claim to really understand what's going
> on here.

Looking at the tun driver code, it seems setting this flag will bypass the
normal network packet queueing on the tun device. This makes sense in a way;
the packets will already have been queued on a real network device.
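
For reference, setting the flag is just a matter of OR'ing it into ifr_flags
when the device is configured with TUNSETIFF. A minimal, Linux-only sketch
(illustrative only, not the actual tinc device code):

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <net/if.h>
    #include <linux/if_tun.h>

    /* Open a tun device with IFF_ONE_QUEUE set, which bypasses the
       driver's own packet queue.  Sketch only. */
    int open_tun_one_queue(const char *name) {
        struct ifreq ifr;
        int fd = open("/dev/net/tun", O_RDWR);

        if(fd < 0)
            return -1;

        memset(&ifr, 0, sizeof ifr);
        ifr.ifr_flags = IFF_TUN | IFF_NO_PI | IFF_ONE_QUEUE;
        strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);

        if(ioctl(fd, TUNSETIFF, &ifr) < 0) {
            close(fd);
            return -1;
        }

        return fd;
    }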

> SO_SNDBUF/SO_RCVBUF for the UDP socket:
> 
> I haven't actually bothered patching this in yet (because I have a
> sysctl workaround for now, it's less urgent to me), but they're
> obviously a good idea and I can add them in any patches I send up.

Sure.
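
Something along these lines would do; note that setsockopt() values are
capped by net.core.rmem_max and net.core.wmem_max, so those may need raising
as well. The 4 MiB figure and the names below are only placeholders, the real
size should of course come from tinc.conf:

    #include <stdio.h>
    #include <sys/socket.h>

    /* Sketch: enlarge the send and receive buffers of the UDP socket.
       The size is only an example and should be configurable. */
    static void set_udp_buffers(int udp_fd, int bytes) {
        if(setsockopt(udp_fd, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof bytes) < 0)
            perror("setsockopt SO_SNDBUF");
        if(setsockopt(udp_fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof bytes) < 0)
            perror("setsockopt SO_RCVBUF");
    }

    /* e.g. set_udp_buffers(udp_fd, 4 * 1024 * 1024); */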

> Packet sequence (/loss/replay/etc) issues:
> 
> I made a patch that made the late-packet bitmap size configurable, and
> increasing it (first to 512 packets, then to 2048) did do wonders for
> us.  If you've only got a handful of TCP connections flowing through a
> tunnel, given that TCP can only handle out-of-order delivery to a small degree
> (via SACK, etc), packets outside the default 128 range would be
> useless anyways.  However, if you've got many thousands, it's easy for
> say 1000 packets to go through the tunnel without any two of them
> being from a single TCP session, which really changes the game and
> makes you want to save late/early packets well outside the default 128
> range in case of reordering.

Indeed. Perhaps it could be autotuned; the bitmap size should be approximately
(bandwidth * TCP retransmission timeout / packet size) to accommodate any
reordering of packets before they are retransmitted by the higher layers anyway.
But in the meantime, a configurable bitmap size is also welcome.
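
To give an idea of the magnitude: at 250 Mbit/s, 1500-byte packets and an
assumed 200 ms retransmission timeout, that formula gives about
250e6 / 8 / 1500 * 0.2, roughly 4200 packets, so the default of 128 is indeed
far too small for your case. A rough sketch of such autotuning (the 200 ms RTO
is just an assumption, and the function name is made up):

    #include <stdint.h>

    /* Sketch: bitmap size ~ bandwidth * RTO / packet size, rounded up
       to a power of two.  The 200 ms RTO is an assumption. */
    static uint32_t suggested_replay_window(uint64_t bandwidth_bps, uint32_t packet_size) {
        const double rto = 0.2; /* seconds */
        double packets = (double)bandwidth_bps / 8.0 / packet_size * rto;
        uint32_t size = 128;    /* current default */

        while(size < packets && size < 65536)
            size <<= 1;

        return size;
    }

    /* suggested_replay_window(250000000, 1500) == 8192 */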

> Another thing we get hit with (thanks to the underlying network at
> Amazon under brutal conditions, I think) is isolated packets jumping
> the queue by a lot.  E.g. in a given tunnel's sequence, we might see
> 1-2 packets suddenly arrive hundreds of seqnos ahead of the most
> recent seqno, but then they're immediately followed by all of the
> "missing" packets (many of which would then be dropped by tinc by
> default).

Aha. How often does that happen?

> So I also added a patch that makes tinc a little bit more
> resilient against this.  When the first far-future packet arrives
> (outside the size of the late window), that packet is dropped and the
> sequence-tracking is unaffected.
[...]
> I eventually realized that this whole late-packet tracking thing is
> really all about replay security.  Since I'm using tinc more to get
> around routing limitations than for security (I already disabled
> encryption and authentication too), for a scenario like mine the ideal
> solution is to simply pass all traffic and ignore the sequence numbers
> on the receiver side (and not even bother maintaining the late-packet
> bitmap).  So I updated my earlier patch to allow the late-packet
> window to be set to zero, which disables the related code in the
> net_packet receiver (and again improved the situation for us in the
> real world).

Ok, that's probably the right thing to do in your scenario.
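
For the record, the receive path I picture then looks roughly like this; the
helper and variable names are purely illustrative, not the actual net_packet.c
code:

    #include <stdbool.h>
    #include <stdint.h>

    /* Stand-in for the real late-packet bookkeeping (bitmap lookup,
       far-future handling, etc.); declared here only for illustration. */
    bool check_late_packet_bitmap(uint32_t seqno, uint32_t window_size);

    /* Sketch: a window size of zero disables the replay/sequence check
       completely, so every packet is accepted regardless of order. */
    static bool seqno_ok(uint32_t seqno, uint32_t window_size) {
        if(window_size == 0)
            return true;    /* no tracking: accept everything */

        return check_late_packet_bitmap(seqno, window_size);
    }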

> I'm not sure how much of this you want upstream or how you want it
> broken up, but let me know,

I think all of this can be merged. Each feature in its own patch would be very
nice; the order is less important.

-- 
Met vriendelijke groet / with kind regards,
     Guus Sliepen <guus at tinc-vpn.org>