LocalDiscovery flip flopping and network design tips

James Hartig james at levenlabs.com
Tue Feb 14 22:01:39 CET 2017


On Tue, Feb 14, 2017 at 3:43 PM, Etienne Dechamps <etienne at edechamps.fr> wrote:
> Hang on a second. I've just re-read your original message and I
> believe you are confused about what the "Subnet" option does. Again,
> it deals with addresses *inside* the VPN. In the configuration you
> posted you seem to be using 10.240.0.4 and 10.240.0.5 as internal
> addresses, but then your other statements (and especially your dump
> edges output) seem to indicate that 10.240.0.4 and 10.240.0.5 are
> *also* the physical addresses of the machines on their physical
> network interfaces.
>
> That's not going to work: as soon as tinc manages to establish the
> VPN, 10.240.0.4 and 10.240.0.5 become routable on *both* the virtual
> and physical interfaces, resulting in conflicts, and it all goes
> downhill from there. That would completely explain the weird phenomena
> you're reporting. If you make sure to use different IP subnets for VPN
> addresses and physical addresses, your problems should go away.

That's kind of intentional. I want tinc to be able to receive traffic
destined for the local network over the tinc tunnel. I might be doing
this wrong, and I'm obviously open to suggestions.

test_tinc_1 has an internal IP of 10.240.0.4/16
test_tinc_1 config is:
Subnet = 10.240.0.0/16
Subnet = 10.240.0.4/32
Address = 104.154.59.151

I want other nodes that can talk to test_tinc_1 to also be able to
talk to the network that test_tinc_1 is on. I might not need "Subnet =
10.240.0.0/16" if I expect all of the nodes in 10.240.x.x to have tinc
installed, right? Will tinc still forward packets to the other tinc
instances if I just have "Subnet = 10.240.0.4/32"? I think I need tinc
because otherwise, if a server on 10.240.x.x doesn't have tinc, it
won't know how to route packets back to a 10.80.0.0/16 IP (see
below). When I create the actual tun0 interface for test_tinc_1, I
give it a throwaway subnet: 192.168.0.1/24.
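
For reference, the interface setup happens in my tinc-up script, which
looks roughly like this sketch (the exact script isn't quoted above,
so the details here are assumptions; tinc passes the interface name in
$INTERFACE):

#!/bin/sh
# tinc-up for test_tinc_1 (sketch)
ip link set "$INTERFACE" up
# the throwaway address mentioned above
ip addr add 192.168.0.1/24 dev "$INTERFACE"
# send the other DC's internal range over the tunnel
ip route add 10.80.0.0/16 dev "$INTERFACE"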

Here's how I have it working and how I envisioned it working:

(keeping the same test_tinc_1 as above)
test_other_1 has an internal IP of 10.80.0.2/16
test_other_1 config is:
Subnet = 10.80.0.0/16
Subnet = 10.80.0.2/32
Address = 128.227.195.201

test_other_1's tun0 interface has the IP address 192.168.0.2/24 and
after it comes up I do:
ip route add 10.240.0.0/16 dev tun0

test_tinc_1's tun0 interface has a similar route for 10.80.0.0/16
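
A quick sanity check I run on test_other_1 to confirm the kernel will
actually send that range over the tunnel (sketch; exact output will
vary):

ip route get 10.240.0.4
# should report something like "10.240.0.4 dev tun0 src 192.168.0.2"
ping -c 1 10.240.0.4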

When test_other_1 comes up, it can talk to 10.240.0.4 and other
10.240.x.x servers as long as they have tinc installed (and their tun0
interfaces have 192.168.0.x addresses). If I used iptables
masquerading, I could get it talking to non-tinc servers in
10.240.x.x, but some of our applications (Mongo, Cassandra, etc.) rely
on the source IP being connectable, so I can't use masquerading.
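
To be concrete, this is the kind of NAT rule I'm trying to avoid on
test_tinc_1, since it rewrites the source address and breaks anything
that needs to call the client back (eth0 is just an assumption for the
10.240.x.x interface):

# SNAT traffic arriving from the other DC before it hits the local LAN
iptables -t nat -A POSTROUTING -s 10.80.0.0/16 -o eth0 -j MASQUERADE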

I'm definitely open to (and hopeful for) a better way of doing this.
Essentially, we want to extend the existing networks out to all of our
boxes. So if we have 4 DCs with subnets 10.240.0.0/16, 10.260.0.0/16,
10.80.0.0/16, and 10.100.0.0/16, we want all of the boxes in each of
those DCs to use their internal IPs to talk to any other server's
internal IPs without masquerading. We get this for free with Google
Compute for 3 of those subnets, but we can't use Google Compute for
all of our servers (developer machines, other non-Google regions,
etc.), so that's why we're looking into tinc. We are currently using
OpenVPN for this, but we want something mesh-based so we can handle
the OpenVPN master server going down.
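
For the mesh side, my understanding is that each node only needs a
couple of ConnectTo lines in its tinc.conf and tinc builds the rest of
the mesh itself, so there's no single server whose failure takes
everything down. A sketch (node and netname are made up):

# /etc/tinc/<netname>/tinc.conf on a developer machine
Name = dev_laptop_1
ConnectTo = test_tinc_1
ConnectTo = test_other_1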

> On 14 February 2017 at 20:36, Etienne Dechamps <etienne at edechamps.fr> wrote:
>> On 14 February 2017 at 18:59, James Hartig <james at levenlabs.com> wrote:
>>> When you say "and to the local network" what IP does it try to send to
>>> on the local network? The subnet address?
>>
>> No. The Subnet option deals with routing *inside* the VPN, not the
>> underlying "real" network.
>>
>> In tinc 1.1, the address that local discovery probes are sent to is
>> the local address of the recipient node, as determined by the socket
>> local address of its metaconnection. That's the address shown next to
>> "local" in the dump edges output. In your case the local address is
>> advertised correctly - there is no problem there.

