rakshasa/rtorrent

Tracker: [Couldn't resolve host name]

Closed this issue · 17 comments

Every time I restart rtorrent, I get the following error on every torrent that I am seeding.

Tracker: [Couldn't resolve host name]

Once I delete the session data, the torrent rechecks and works fine until I reboot again. When I am seeding 100 torrents, that is a lot of work to do every time I reboot.

This happens when rtorrent is not built against a libcurl that uses c-ares. It occurs when a lot of torrents try to contact their trackers at the same time, for example after a restart of the client. When you rehash the data, the torrents start at different times, which is why it then works.

I have the same issue. I've been using rtorrent-0.9.6 for ages on Slackware with roughly 400 torrents loaded. All of them were added individually, so I never encountered the "Couldn't resolve host name" error message: the connections to the trackers always happened at more or less different times. Last week my Linux machine had an unclean shutdown, after which rtorrent threw this error for every torrent.

I've always thought that curl with --enable-ares was mainly to solve the temporary freeze in rtorrent when you add several torrents at once, though it has been suggested to fix the "Couldn't resolve host name" error as well.

Reading up on this error message, I installed c-ares via SlackBuilds, compiled and installed curl with --enable-ares, and then recompiled libtorrent-0.13.6 and rtorrent-0.9.6 against it. If I run curl-config --features I can see AsynchDNS listed, so this should solve the problem.
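For anyone verifying the same setup, the checks can be scripted; this is a sketch that assumes curl-config and the rtorrent binary are on the PATH:

```shell
# Check whether libcurl was built with asynchronous DNS
# (c-ares or the threaded resolver); "AsynchDNS" must be listed.
curl-config --features | grep AsynchDNS

# Make sure the rtorrent binary actually links against that libcurl;
# a stale copy in /usr/local/lib can shadow the freshly built one.
ldd "$(command -v rtorrent)" | grep libcurl
```

If the second command shows a different libcurl than the one you just built, rtorrent is still using the old resolver.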

However, once I deleted all session data (as a test) and added about 100 torrents at once to my watch folder, the error message showed up again. If I delete all session data and add torrents in groups of 5 or 10, it largely works, but sometimes it fails, and eventually the error shows up for all torrents once they contact the trackers again.

I'm honestly not sure what to check anymore. My DNS resolution works fine: I use Google's DNS servers and can resolve domains without problems. But even with a libcurl built with --enable-ares, rtorrent still throws this error. I've considered going into the source code to see whether I can add slight delays between tracker connections when a large number of torrents are added at the same time, but maybe there is another fix?

Let me know if I can provide any information to help you help me/us. :-)

This is in no way an official workaround, but try starting nscd (systemctl start nscd).

The resolver attempts to query nscd first, and having it running seems to resolve the issue on my system (without c-ares).
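For reference, a minimal sketch of that workaround (run as root; the exact `nscd -g` output format varies by glibc version):

```shell
# Start the name service cache daemon now, and on every boot:
systemctl start nscd
systemctl enable nscd

# Check that the hosts cache is enabled and answering:
nscd -g | grep -i -A2 'hosts cache'

# Lookups now hit the local cache, so a burst of identical
# tracker lookups after a restart is served locally:
getent hosts github.com
```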

The real solution would be to add a little jitter to the interval values, which would also be a lot more tracker-friendly. Real random jitter, though, would require my PR on libtorrent to be merged…

Trippler, thanks for the suggestion. I've tried using nscd but unless I recompile everything and disable the use of c-ares, it's not used by rtorrent.

As I never had this issue in the past (mainly because I fed the torrents one at a time, spread over several days/weeks/months), I've just created a script that adds each torrent file in a directory to my watch folder with a fixed delay of several minutes. If that fails, I will randomize the delay and see if that works.
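A sketch of such a feeder script with randomized delay, assuming a hypothetical /data/queue staging directory and the watch path from the config below; adjust both to your layout:

```shell
#!/bin/bash
# Move .torrent files into the watch folder one at a time with a
# randomized delay, so the initial tracker announces are spread out.
SRC=/data/queue                     # where new .torrent files pile up
WATCH=/data/TorrentFiles/load       # rtorrent watch directory

for t in "$SRC"/*.torrent; do
    [ -e "$t" ] || break            # glob didn't match: nothing to do
    mv -- "$t" "$WATCH"/
    delay=$(( 60 + RANDOM % 120 ))  # 1 to just under 3 minutes of jitter
    sleep "$delay"
done
```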

If all else fails, I can only hope the recommendation of pyroscope will be adopted. I really don't want to switch torrent clients. :-)

Perhaps a message could be output pointing out that nscd or c-ares needs to be installed or restarted?

The nscd suggestion seems to have helped, but it hasn't cleared up the issue 100%.

Figured out my issue: I didn't have enough open files allowed for the shell user. I raised the limit from 1024 to 8192 and now it all works. The open-files limit also counts open sockets, so the torrents couldn't reach the tracker because no connection (socket) could be created.
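This failure mode can be confirmed while rtorrent is running; a sketch, assuming a Linux /proc filesystem:

```shell
# Soft limit on open descriptors for the current shell:
ulimit -n

# Descriptors the running rtorrent process currently holds; files AND
# sockets count against the same limit, so when this approaches the
# limit, new tracker connections fail and the announce errors out:
ls /proc/"$(pidof rtorrent)"/fd 2>/dev/null | wc -l
```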

Added LimitNOFILE=8192 to my systemd unit file for rtorrent:
/etc/systemd/system/rtorrent.service

[Unit]
Description=rTorrent
After=network.target

[Service]
LimitNOFILE=8192
Type=forking
KillMode=none
User=rtorrent
Group=media
#ExecStartPre=/usr/bin/bash -c "if test -e %h/.rtorrent_session/rtorrent.lock && ! pidof rtorrent >/dev/null; then rm -f %h/.rtorrent_session/rtorrent.lock; fi"
ExecStart=/usr/bin/screen -dmfa -S rtorrent /usr/bin/rtorrent
ExecStop=/usr/bin/bash -c "pidof rtorrent >/dev/null && killall -w -s 2 rtorrent"
WorkingDirectory=%h
Restart=on-failure

[Install]
WantedBy=multi-user.target
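After editing the unit, the new limit only takes effect once systemd reloads its configuration and the service restarts (run as root); it can then be verified against the live process:

```shell
# Pick up the edited unit file and restart the service:
systemctl daemon-reload
systemctl restart rtorrent

# What systemd thinks the limit is:
systemctl show rtorrent -p LimitNOFILE

# What the running process actually got:
grep 'Max open files' /proc/"$(pidof rtorrent)"/limits 2>/dev/null
```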

Wow, great feedback, Steve! I've been playing around with rtorrent instances over the last two weeks and was about to post an update as well. I held off on setting up nameserver caching and instead simply limited the number of torrents in each rtorrent instance. By lowering the number of connections and upload slots, and spreading a more reasonable number of open files over two to three rtorrent instances, all torrents are up and running again.

You might really be on to something with the link between open files and sockets. :-)

The open-files limit also counts open sockets, so the torrents couldn't reach the tracker because no connection (socket) could be created.

Hmm. That's interesting.
What are your related config settings? I have these and I never had issues:
usually all 999 allowed files are open all the time, while the number of used sockets fluctuates.

I am using for the most part the Template Config.

#############################################################################
# A minimal rTorrent configuration that provides the basic features
# you want to have in addition to the built-in defaults.
#
# See https://github.com/rakshasa/rtorrent/wiki/CONFIG-Template
# for an up-to-date version.
#############################################################################

# Instance layout (base paths)
method.insert = cfg.basedir, private|const|string, (cat,"/data/")
method.insert = cfg.watch,   private|const|string, (cat,(cfg.basedir),"TorrentFiles/")
method.insert = cfg.logs,    private|const|string, (cat,(cfg.basedir),"log/")
method.insert = cfg.logfile, private|const|string, (cat,(cfg.logs),"rtorrent-",(system.time),".log")

# Create instance directories
#execute.throw = bash, -c, (cat,\
#    "builtin cd \"", (cfg.basedir), "\" ",\
#    "&& mkdir -p .session download log watch/{load,start}")

throttle.global_down.max_rate.set_kb = 0
throttle.global_up.max_rate.set_kb   = 0
throttle.max_downloads.global.set = 300
throttle.max_uploads.global.set   = 300

# Listening port for incoming peer traffic (fixed; you can also randomize it)
network.port_range.set = 50000-50000
network.port_random.set = no

# Tracker-less torrent and UDP tracker support
# (conservative settings for 'private' trackers, change for 'public')
dht.mode.set = disable
protocol.pex.set = no
trackers.use_udp.set = no

# Peer settings
throttle.min_peers.normal.set = 1
throttle.max_peers.normal.set = 60
throttle.min_peers.seed.set = 1
throttle.max_peers.seed.set = 80

# Limits for file handle resources, this is optimized for
# an `ulimit` of 1024 (a common default). You MUST leave
# a ceiling of handles reserved for rTorrent's internal needs!
network.http.max_open.set = 100
network.max_open_files.set = 1200
network.max_open_sockets.set = 600

# Memory resource usage (increase if you have a large number of items loaded,
# and/or the available resources to spend)
pieces.memory.max.set = 1800M
network.xmlrpc.size_limit.set = 2M

# Basic operational settings (no need to change these)
session.path.set = (cat, (cfg.basedir), ".session/")
directory.default.set = (cat, (cfg.basedir), "download/")

# Watch directories (add more as you like, but use unique schedule names)
schedule2 = watch_start, 10, 10, ((load.start, (cat, (cfg.watch), "start/*.torrent")))
schedule2 = watch_load, 11, 10, ((load.normal, (cat, (cfg.watch), "load/*.torrent")))

scgi_port = 127.0.0.1:5000


# Logging:
#   Levels = critical error warn notice info debug
#   Groups = connection_* dht_* peer_* rpc_* storage_* thread_* tracker_* torrent_*
print = (cat, "Logging to ", (cfg.logfile))
log.open_file = "log", (cfg.logfile)
log.add_output = "info", "log"
#log.add_output = "tracker_debug", "log"

### END of rtorrent.rc ###

I think that was the problem; it's over the system default of 1024:
network.max_open_files.set = 1200

Setting open files to 65535 on the system seems to have solved it for me as well.

@chros73 His systemd unit allows 8k handles.

@pyroscope That doesn't mean the system itself will allow it, though. 1024 is the default limit on my system as well; it can be seen with 'ulimit -a' and changed in /etc/security/limits.conf. The systemd limit, I imagine, sets a soft limit, which further restricts whatever was set.
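For completeness, raising the limit for ordinary login shells goes through pam_limits; a sketch, where "rtorrent" is the service user from the unit above:

```shell
# /etc/security/limits.conf (applied by pam_limits at login):
#   rtorrent  soft  nofile  65535
#   rtorrent  hard  nofile  65535

# Verify from a fresh login shell for that user:
ulimit -Sn   # soft limit
ulimit -Hn   # hard limit
```

Note that this path applies to login sessions; for the systemd service itself, the LimitNOFILE= directive in the unit (or a default in systemd's own configuration) is what counts.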

systemd does not care a bit about your limits.conf. Remember, world domination!

jfoor commented

I realize this is old, but I found this thread after beating my head against the wall for an hour or two trying to figure this out, and I want you all to know I appreciate your fixes!