Describe the Bug
Within a few days I partially lost connectivity, so new connections could not be made because the network was blocking my chosen DNS resolver.
While debugging what worked and what didn't (already established connections persist, so the networks Newt provides from inside my network remained reachable from outside, as they already existed), I restarted the DNS on the router and changed the DNS server several times.
It can happen that Newt fails to recover and gets stuck, consuming all CPU resources. The same behavior occurred on 2 machines; I only noticed because both machines spun their fans up to high noise.
All of the WireGuard tunnels reconnected automatically, but every Newt instance I have got stuck like this and each one needed a manual restart.
In all the logs I have, I can see only two lines (repeated ~10 times):
ERROR: 2026/02/15 22:23:26 Failed to resolve endpoint: DNS lookup failed: lookup sub.domain.xyz on 127.0.0.11:53: read udp 127.0.0.1:37338->127.0.0.11:53: i/o timeout
INFO: 2026/02/15 22:23:26 Connecting to endpoint: sub.domain.xyz
And nothing more, with Newt at 100% CPU.
Or:
INFO: 2026/02/17 20:44:27 Connecting to endpoint: sub.domain.xyz
ERROR: 2026/02/17 20:44:27 Failed to resolve endpoint: DNS lookup failed: lookup sub.domain.xyz on 127.0.0.11:53: server misbehaving
And nothing more, with Newt at 100% CPU.
One machine has 2 networks with Newt, and the second has one.
It seems to me that Newt stops logging, or there is some counter limiting the number of attempts, and it stops retrying after a while?
Environment
- OS Type & Version: 2x Synology DSM (`uname -a`: 4.4.302+ x86_64 GNU/Linux)
- Newt Version: 1.9.0 in Docker
To Reproduce
I don't know precisely how I triggered it. Just by switching between working and non-working DNS states within my network, it happened twice, on both machines.
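I don't have exact steps, but the same "i/o timeout" lookup error from the logs can be provoked locally by pointing Go's resolver at a DNS server that never answers. A sketch, using the TEST-NET address 192.0.2.1 as a stand-in for the blocked resolver (hypothetical helper, not part of Newt):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// lookupVia resolves host through a specific DNS server address,
// bypassing the system resolver. Pointing it at an address where
// nothing answers reproduces a "read udp ... i/o timeout" error.
func lookupVia(resolverAddr, host string) error {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: time.Second}
			return d.DialContext(ctx, network, resolverAddr)
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	_, err := r.LookupIP(ctx, "ip", host)
	return err
}

func main() {
	// 192.0.2.1 (TEST-NET-1) should never respond to DNS queries.
	fmt.Println("lookup error:", lookupVia("192.0.2.1:53", "sub.domain.xyz"))
}
```

Running this shows how Newt's resolver behaves once the chosen DNS server stops answering; in my case the server was Docker's embedded resolver at 127.0.0.11:53.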
Expected Behavior
Newt should reconnect automatically, not stall while consuming all CPU resources.