2018-08-16, 06:16 AM
(2018-08-11, 10:21 AM)PaddyB Wrote: Updated and this from running overnight >
Code:
kplex 1436 pi 5u IPv4 17290 0t0 TCP *:30330 (LISTEN)
kplex 1436 pi 9u IPv4 19599 0t0 TCP localhost:30330->localhost:57696 (CLOSE_WAIT)
kplex 1436 pi 10u IPv4 19603 0t0 TCP 10.10.10.1:30330->10.10.10.1:40634 (ESTABLISHED)
kplex 1436 pi 11u IPv4 19734 0t0 TCP localhost:30330->localhost:45436 (CLOSE_WAIT)
kplex 1436 pi 12u IPv4 83155 0t0 TCP localhost:30330->localhost:51624 (CLOSE_WAIT)
kplex 1436 pi 13u IPv4 104581 0t0 TCP localhost:30330->localhost:53076 (CLOSE_WAIT)
kplex 1436 pi 14u IPv4 107982 0t0 TCP localhost:30330->localhost:53594 (CLOSE_WAIT)
kplex 1436 pi 15u IPv4 111659 0t0 TCP localhost:30330->localhost:56174 (ESTABLISHED)
node-red 1453 pi 20u IPv4 17367 0t0 TCP 10.10.10.1:40634->10.10.10.1:30330 (ESTABLISHED)
node 10278 pi 48u IPv4 119362 0t0 TCP localhost:56174->localhost:30330 (ESTABLISHED)
So better but not perfect: still a few of those pesky CLOSE_WAIT connections.
A scenario that could produce this is the SK server crashing for some unrelated reason and getting restarted by systemd.
This is actually a realistic case for kplex and CLOSE_WAIT in general: CLOSE_WAIT means the remote end has closed the connection but the local process has not yet closed its own socket, so a client that crashes and gets restarted will leave dangling connections behind.
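If you want to check whether these keep piling up, a quick count makes it obvious (a sketch using ss from iproute2, which ships with Raspbian; the port 30330 is taken from your listing):
Code:
# count kplex sockets stuck in CLOSE_WAIT on port 30330
ss -tn state close-wait '( sport = :30330 )' | tail -n +2 | wc -l
Run it a few times over a day or so; a steadily growing number would fit the crash-and-restart theory.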
Do you see the SK server restarting in syslog?
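Something like this should show restarts (the systemd unit name below is just a guess, adjust it to whatever your install uses):
Code:
# quick check in syslog itself
grep -i signalk /var/log/syslog
# or via journalctl; "signalk.service" is an assumed unit name
journalctl -u signalk.service | grep -iE 'start|stop|fail'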
You can also get SK TCP provider debug logging by running the server with the DEBUG environment variable set:
Code:
DEBUG=signalk-provider-tcp
This will log all connects, disconnects, errors and reconnects.
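For example, when starting from a shell (the signalk-server command here is just an example, use whatever starts your server):
Code:
# one-off run with TCP provider debug logging enabled
DEBUG=signalk-provider-tcp signalk-server

# or, if it runs under systemd, add this line to the unit file and restart:
# Environment=DEBUG=signalk-provider-tcp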