LocalDiscovery flip flopping and network design tips

Etienne Dechamps etienne at edechamps.fr
Tue Feb 14 19:22:37 CET 2017


Can you specify which version of tinc you're using? There are vast
differences in the way LocalDiscovery works between 1.0 and 1.1: the former
uses broadcasts, while the latter unicasts to explicitly advertised local
addresses.
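
If you're not sure which one you have installed, running the daemon with its
version flag should tell you, e.g.:

  tincd --version

(just a quick way to check; exact paths and package names may differ on your
system).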

You say that tinc_test_1's eth0 interface is configured with 10.240.0.4,
and tinc_test_2's eth0 interface is configured with 10.240.0.5. How are the
public addresses (104.154.59.151 and 104.197.132.141) configured? Are they
simply forwarded by some other router somewhere? Or are they configured on
some other network interface on the machine?
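
For example, checking the address configuration on each machine should make
this clear:

  ip -4 addr show

If 104.154.59.151 / 104.197.132.141 don't appear on any interface and you only
see the 10.240.0.x addresses, then the public addresses are presumably being
1:1 NATed somewhere upstream.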

If you're using tinc 1.1, please provide the output of "tinc dump edges"
while your network is up.
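
Assuming the netname is "test" (as in the paths below), that would be
something like:

  tinc -n test dump edges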

On 14 February 2017 at 16:21, James Hartig <james at levenlabs.com> wrote:

> We are testing tinc inside Google Compute within a single region and an
> external region. Two boxes are created as follows:
> /etc/tinc/test/tinc_test_1
> Subnet = 10.240.0.0/16
> Subnet = 10.240.0.4/32
> Address = 104.154.59.151
>
> /etc/tinc/test/tinc_test_2
> Subnet = 10.240.0.0/16
> Subnet = 10.240.0.5/32
> Address = 104.197.132.141
>
> /etc/tinc/test/tinc.conf
> Name = $HOST
> AddressFamily = ipv4
> Interface = tun0
> LocalDiscovery = yes
>
> Those 2 boxes are in the same subnet and have addresses of 10.240.0.4 and
> 10.240.0.5, respectively, on their eth0 interface. Port 655 is open to the
> world on both TCP and UDP. The tinc_test_2 box has a ConnectTo of
> tinc_test_1. When tinc_test_2 is started, it prints out:
>   UDP address of tinc_test_1 set to 104.154.59.151 port 655
>   UDP address of tinc_test_1 set to 10.240.0.4 port 655
>   UDP address of tinc_test_1 set to 104.154.59.151 port 655
>   UDP address of tinc_test_1 set to 10.240.0.4 port 655
> repeatedly for a minute or so before finally settling on 10.240.0.4.
>
> Is there a reason it's flip-flopping? Is that expected? Am I doing
> something wrong?
>
> Additionally, we have multiple Google Compute regions with their own
> subnets and external DCs with their own subnets, and we'd like to install
> tinc on all servers but keep intra-Google traffic on the internal IPs rather
> than the external IPs, since it's an order of magnitude cheaper. My first
> thought is a hub-and-spoke model: we have 2 boxes in each subnet that have
> port 655 open to the world, and all the other servers have 655 open to
> internal IPs only. With LocalDiscovery (as well as IndirectData = yes on
> "non-public" servers) this seems to work pretty well, as far as I can tell.
> But it wouldn't solve the intra-Google traffic between subnets, since Google
> Subnet0 would talk to Google Subnet1 over the public IPs. What's the best
> way of doing something like this? I was thinking maybe 2 instances of tinc
> on the "public" boxes, but Google servers only have a single interface,
> eth0, which has the internal IP, so I couldn't listen on the external and
> internal IPs separately.
>
> Thanks!
>