No connection between nodes on same LAN

Rob Townley rob.townley at gmail.com
Fri May 7 10:05:53 CEST 2010


On Fri, May 7, 2010 at 2:03 AM, Daniel Schall <Daniel-Schall at web.de> wrote:
> Thank you guys for your answers.
>
>
>
>> If you only have the nodes behind the NAT ConnectTo the node with a public
>> IP address, they will never be able to discover that they are on the same
>> LAN.
>
>> However, if you add a "ConnectTo = Node2" to Node1's tinc.conf, and add
>> "Address = 192.168.0.102" to Node1's hosts/Node2 file, then it will make a
>> direct connection.
>
>
>
> Unfortunately, the nodes get their IP by DHCP, so a fixed address would not
> help.
>
> Setting up a local node behind each router with a static IP is also not
> possible, since my nodes are often on-the-go in foreign networks, where I am
> unable to set up static addresses by myself.
>
>
>
>> Tinc does not autoconnect to other nodes; it only connects if you set a
>> "ConnectTo" line in tinc.conf. But tinc does mesh after connecting, so all
>> connections will be announced and packets will find their destination
>> automatically.
>
>
>
> As far as I understood the documentation, “connectto” is used to connect
> nodes to each other in order to exchange meta-information about where each
> node is located (IP:port).
>
> The actual “meshing” always occurs directly between all nodes, unless this
> is impossible due to firewalls etc.
>
> So shouldn’t it be enough to connect all nodes to a centralized one (Node3),
> where they can exchange their address details in order to connect directly
> to those addresses afterwards?
>
>
>
> I have attached a sketch of my issue.
>
> The upper half shows the physical setup with 3 nodes, one static and two
> behind a router.
>
> The bottom half shows the communication flow between node1 and node2. The
> packets follow the line to the NAT-Port, get passed to the other NAT-Port
> and back to the target node.
>
> The dotted line shows the desired flow, which would be much shorter than
> going over the router.
>
>
>
> In my opinion, tinc does not support multiple endpoints, hence Node3 saves
> only the publicly visible (NATed) endpoints for Node1 and Node2.
>
> The privately visible endpoints in the LAN are not saved and announced back.
> Therefore, Node1 and Node2 never know they are on the same network.
>
>
>
> Do you have any advice on how to achieve the desired behavior?
>
> I’d suggest that each node announce its local endpoint to the other nodes on
> ConnectTo, and that the other node save this endpoint together with the
> publicly visible one it sees the packets coming from.
>
> That would enable each node to select the “best” endpoint to connect to the
> other node.
>
> This selection could either be algorithmic by calculating the shortest
> distance to the other endpoint or by trying out and selecting the one with
> the lowest round trip time.
>
>
>
>
>
> Best
>
>
>
> Daniel
>
> _______________________________________________
> tinc mailing list
> tinc at tinc-vpn.org
> http://www.tinc-vpn.org/cgi-bin/mailman/listinfo/tinc
>
>

Thanks for the diagram  -  what did you use to create it?

First, what version of tinc are you using on your nodes  -  is it 1.0.13?

Nodes behind the same NAT device have seemed to connect directly for
me (based on ping times).  Literally, the same layout as yours, in
either router or switch mode: a desktop and a laptop on one NATted LAN
connect to remote systems.  After a while, they would connect at LAN
speeds.  That was under 1.0.11 and 1.0.12, compiled myself for Linux
and Windows.  But I wonder what happens to wireless nodes on a WLAN in
which client-to-client communication has been disabled at the wireless
access point.

If you are using 1.0.13, then "Experimental StrictSubnets, Forwarding
and DirectOnly options, giving more control over information and
packets received from/sent to other nodes." may be getting in your
way.

I do not think you want DirectOnly enabled.
http://www.tinc-vpn.org/git/browse?p=tinc;a=commit;h=3e4829e78a3c7f7e19017d05611e5b69d5268119
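For what it's worth, here is a minimal sketch of the config I'd expect to
work for your layout (node names are from your diagram; 203.0.113.1 is a
placeholder for Node3's real public address):

```
# Node1's tinc.conf -- meta-connect only to the node with a public IP
Name = Node1
ConnectTo = Node3
# do NOT set "DirectOnly = yes" here

# Node1's hosts/Node3 file
Address = 203.0.113.1

# Optionally, per the earlier suggestion, add a direct link to Node2:
#   in tinc.conf:    ConnectTo = Node2
#   in hosts/Node2:  Address = 192.168.0.102
```

Node2 would mirror this, and Node3 itself needs no ConnectTo at all.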

If need be, the tinc information on other nodes could in theory be
stored in DNS KEY, SRV and TXT records, LDAP, or FusionInventory.  If
you could trust Avahi multicast DNS, it would solve your problem
(but I don't think your problem exists in all situations).  The
ChaosVPN system may help as well, but I haven't had a chance to
examine the code and try it out.
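Purely as a sketch of the DNS idea (the record layout here is hypothetical;
tinc itself doesn't read these, you'd need your own tooling to publish and
consume them):

```
; example.com zone -- hypothetical records publishing a node's endpoints
_tinc._udp.node1.example.com.  IN SRV  0 0 655 node1-public.example.com.
node1.example.com.             IN TXT  "lan-endpoint=192.168.0.101:655"
```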

