Segfaults on connection loss
zorun
zorun at polyno.me
Wed Jun 25 10:01:03 CEST 2014
Here is a summary of the previous mail. "Dual-stack", "IPv4-only", etc.,
refer to the IP protocol the local node is using to try to connect to
the remote server (DNS resolution of the remote hostname).
The previous mail describes two situations:
- IPv4-only: all is fine. After the timeout, Tinc purges things,
waits a bit, and tries to reconnect.
- Dual-stack, with data communication occurring over v4 before the
timeout: Tinc immediately tries to reconnect over v6, and crashes.
Some more information with two new situations:
- IPv6-only: all is fine, exactly as in the IPv4-only case.
- Dual-stack, with data communication occurring over v6 before the
timeout: Tinc immediately tries to reconnect over v4, and crashes.
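For context on why the dual-stack cases differ: when the remote hostname
resolves to both an AAAA and an A record, getaddrinfo() hands back a list of
candidate addresses, and a client that fails on one family immediately falls
through to the next. A minimal sketch of that pattern (not tinc's actual
code; AI_NUMERICHOST is used here only to keep the demo offline):

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void) {
    struct addrinfo hints, *res, *ai;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* dual-stack: accept v4 and v6 */
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_NUMERICHOST;  /* skip real DNS for this demo */

    /* With a real hostname this list can hold both families, and a
     * failure on the first entry leads straight to a retry on the next. */
    if (getaddrinfo("::1", "656", &hints, &res) != 0)
        return 1;
    for (ai = res; ai; ai = ai->ai_next)
        printf("candidate family: %s\n",
               ai->ai_family == AF_INET6 ? "IPv6" : "IPv4");
    freeaddrinfo(res);
    return 0;
}
```

That immediate fallthrough would explain why the crash only appears in the
dual-stack cases: the single-family cases have no second address to retry.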
Here is the log of this last situation:
Sending PING to REMOTE (2001:db8::1 port 656)
Got PACKET from REMOTE (2001:db8::1 port 656)
Sending PACKET to REMOTE (2001:db8::1 port 656)
Got PING from REMOTE (2001:db8::1 port 656)
Sending PONG to REMOTE (2001:db8::1 port 656)
Got PACKET from REMOTE (2001:db8::1 port 656)
REMOTE (2001:db8::1 port 656) didn't respond to PING in 5 seconds
Closing connection with REMOTE (2001:db8::1 port 656)
Sending DEL_EDGE to everyone (BROADCAST)
UDP address of REMOTE cleared
UDP address of OTHER_SERVER1 cleared
UDP address of OTHER_SERVER2 cleared
UDP address of OTHER_SERVER3 cleared
UDP address of OTHER_SERVER4 cleared
Sending DEL_EDGE to everyone (BROADCAST)
Trying to connect to REMOTE (XX.XX.XX.XX port 656)
Connected to REMOTE (XX.XX.XX.XX port 656)
Sending ID to REMOTE (XX.XX.XX.XX port 656)
Timeout from REMOTE (XX.XX.XX.XX port 656) during authentication
Closing connection with REMOTE (XX.XX.XX.XX port 656)
Segmentation fault (core dumped)
It looks like in the dual-stack case, the code path ignores the
"purge" part (visible in the IPv4-only log from the previous mail),
which leads to some partially cleaned-up edges being present.