tinc 0.3.3 vs. 1.0pre2
Axel Müller
axel.mueller at i2c-systems.com
Sat Jun 24 12:58:05 CEST 2000
--On Freitag, 23. Juni 2000 18:07 +0200 Guus Sliepen
<guus at warande3094.warande.uu.nl> wrote:
>> In the scenario in which we use tinc this feature is crucial. That's why
>> I modified tinc 0.3.3, which we are still running. As soon as you have
>> VPN (tinc outgoing) clients accessing whole networks through a VPN
>> (tinc incoming) server, you have to tell the VPN client to send
>> everything to the VPN server (proxy) regardless of whether this is a
>> known VPN destination.
>
> Your setup only works if EVERY host uses proxymode (or whatever it'll be
> called). That's not scalable either. The Right Thing(tm) would be that a
> tinc daemon can tell its uplink that it wants to use proxymode, and that
> the uplink tells all other hosts about the new host, but sends its own
> real IP address instead of the one from the new host. We'll implement that
> ASAP.
If you have ONE tinc acting as VPN server and MANY acting as VPN clients to
the VPN server, then all of them have to use the VPN server as proxy. The
VPN clients don't have to know about each other as long as the VPN server
knows all current VPN clients, so it can deliver the packets to the proper
real IPs. I don't think that each VPN client has to know about every other
VPN client when using a proxy. I would therefore not consider this an
issue of the tinc meta protocol if normal routing is sufficient.
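To sketch the client-side decision I mean (toy Python, not tinc code; the
subnets and real IPs are made-up examples): with proxy mode on, any
destination outside the known VPN subnets is simply tunnelled to the VPN
server's real IP.

```python
import ipaddress

# Hypothetical model of a proxy-mode client's forwarding decision.
# All names, subnets and addresses below are illustrative only.
KNOWN_VPN_SUBNETS = {
    ipaddress.ip_network("192.168.9.0/24"): "212.79.9.74",  # example real IP
}
PROXY_REAL_IP = "212.79.9.74"  # the VPN server's real IP (example)

def next_hop(dst, proxy_mode=True):
    """Return the real IP to tunnel a packet for dst to, or None."""
    addr = ipaddress.ip_address(dst)
    for net, real_ip in KNOWN_VPN_SUBNETS.items():
        if addr in net:
            return real_ip
    # Unknown destination: in proxy mode, hand it to the server,
    # which knows every current client and the networks behind it.
    return PROXY_REAL_IP if proxy_mode else None

print(next_hop("192.168.75.10"))         # intranet host behind the server
print(next_hop("192.168.75.10", False))  # without proxy mode: no route
```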
Maybe I should explain a bit more about the environment we use tinc in:
Usually I (tap0=192.168.9.100) want to access any host on our corporate
intranet (192.168.75.0) through the firewall running tinc
(tap0=192.168.9.1). None of those hosts is running tinc, but it is obvious
from the routing table that packets going to our intranet have to go
through tap0, i.e. through tinc.
Look at the routing table I have on my Linux box at home connecting through
a dial-up link to the Internet and to our corporate network:
root@tomcat:/usr/src > netstat -r
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
192.168.75.0    192.168.9.1     255.255.255.0   UG        0 0          0 tap0
192.168.99.0    *               255.255.255.0   U         0 0          0 vmnet1
212.122.151.0   *               255.255.255.0   U         0 0          0 ippp0
192.168.9.0     *               255.255.255.0   U         0 0          0 tap0
loopback        *               255.0.0.0       U         0 0          0 lo
default         pg9-nt.frankfur 0.0.0.0         UG        0 0          0 ippp0
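The kernel's choice for a given destination can be mimicked with a
longest-prefix match over the table above (a toy lookup in Python, not the
real kernel code; the default gateway name is a placeholder):

```python
import ipaddress

# Simplified copy of the client routing table above: (network, gateway, iface).
ROUTES = [
    ("192.168.75.0/24",  "192.168.9.1", "tap0"),
    ("192.168.99.0/24",  None,          "vmnet1"),
    ("212.122.151.0/24", None,          "ippp0"),
    ("192.168.9.0/24",   None,          "tap0"),
    ("127.0.0.0/8",      None,          "lo"),
    ("0.0.0.0/0",        "dialup-gw",   "ippp0"),  # default route (name illustrative)
]

def lookup(dst):
    """Pick the most specific matching route, as the kernel does."""
    addr = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(net), gw, ifc)
               for net, gw, ifc in ROUTES
               if addr in ipaddress.ip_network(net)]
    net, gw, ifc = max(matches, key=lambda m: m[0].prefixlen)
    return gw, ifc

print(lookup("192.168.75.42"))  # ('192.168.9.1', 'tap0'): into the tunnel
print(lookup("1.2.3.4"))        # ('dialup-gw', 'ippp0'): plain Internet
```

So any packet for the intranet is handed to tap0 and thus to tinc, without
the destination hosts running tinc themselves.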
And here is the routing table on our VPN server (acting primarily as
Internet gateway, firewall):
lemon:~ # netstat -r
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
212.79.9.72     *               255.255.255.248 U         0 0          0 eth0
212.79.58.0     *               255.255.255.192 U         0 0          0 eth1
192.168.70.0    212.79.58.78    255.255.255.0   UG        0 0          0 eth0
192.168.24.0    *               255.255.255.0   U         0 0          0 eth7
192.168.75.0    *               255.255.255.0   U         0 0          0 eth3
192.168.9.0     *               255.255.255.0   U         0 0          0 tap0
192.168.60.0    *               255.255.255.0   U         0 0          0 eth2
loopback        *               255.0.0.0       U         0 0          0 lo
default         router.i2c-syst 0.0.0.0         UG        0 0          0 eth0
When I first looked at tinc I very much liked the clear way it operates:
the VPN client encrypts packets for VPN destinations (our intranet) and
puts them into other packets, which are then routed to the VPN destination
(our VPN server). There the packet gets decrypted and the original packet
pops out, gets a new sender label (the MAC of the VPN server) and gets
routed to the destination host, which does not necessarily run tinc. You
know tinc much better than I do, but I think tinc handles all this in a
nice way (you won't disagree with that ;-)).
The only thing I missed in tinc was a "proxy feature", which I added with
this simple patch.
My point is that I would rather consider a proxy setting in the tinc.conf
file than extend the meta protocol. (KISS = Keep It Safe & Simple)
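Such a setting might look like this (purely hypothetical syntax; the
option name comes from my patch, not from any released tinc version):

```
# Hypothetical tinc.conf fragment -- option names are illustrative only.
TapDevice = /dev/tap0
ConnectTo = vpnserver
Proxy = yes    # tunnel packets for unknown VPN destinations to the
               # uplink (the VPN server) instead of dropping them
```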
>> P.S.: What are the "cp" at the beginning and end of each function about?
>
> CheckPoint. It's a macro (defined in util.h) that saves the current
> [...]
Thanks :-)
---
TINC development list, tinc-devel at nl.linux.org
Archive: http://mail.nl.linux.org/tinc-devel/