ManiMatter/decluttarr

[Feature request] Option to disable deletion during a certain timeframe in the docker container

Closed this issue · 16 comments

Due to volume limitations imposed by my ISP, I limit my download speed to 1 KiB/s in qBittorrent (essentially disabling downloads) from 17:00 till 00:00.

During this timeframe I've noticed Radarr/Sonarr consider these torrents stalled, which causes them to be deleted.
Having the option to disable deletion during a custom timeframe could be useful.

Have you considered setting up two cron jobs that simply stop the decluttarr container for the night and restart it in the morning?
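
For example, two crontab entries on the Docker host, as a rough sketch (assuming the container is simply named decluttarr and reusing the 17:00–00:00 window from above):

# stop decluttarr at 17:00 every day
0 17 * * * docker stop decluttarr
# start it again at midnight
0 0 * * * docker start decluttarr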

Has my suggestion worked for you?

Ah, sorry.
This can be closed. I've decided not to use this, since it would get me banned from my private trackers: I can't restrict removals by import category.

Did you see that you can turn off decluttarr for private trackers? There's an option for that.
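
For example, in the container environment (both settings also show up in the compose file further down; the qBit tag value is whatever tag you use for private-tracker torrents):

  - IGNORE_PRIVATE_TRACKERS=True
  - NO_STALLED_REMOVAL_QBIT_TAG=~type_PrivateTracker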

"These will continue to be removed (since considered broken): Failed, Files Missing" I think this can still cause issues with hit and run requirements

Are you sure?
My take on these was that when those two errors occur, qBittorrent stops all activity on the torrents; they're non-recoverably broken and won't upload either.
Removing them should therefore not affect your upload stats.

If my understanding is wrong, I'm happy to also exclude those for private trackers; it's a one-line change. But unless I'm wrong, I thought it made no sense.

I'd rather they not be touched at all. I've had random torrents get the missing-files error; in those cases I just rechecked them so they would continue downloading again. I'm mainly afraid of such torrents being flagged as hit-and-run. But I could be wrong as well.

OK, so you had missing-files torrents that recovered. Happy to exclude those for private trackers.

What about failed ones? Have any recovered?

I actually haven't seen any of them fail, so I'm not sure.

I just added the protection across the board. Would you mind pulling the "dev" version and testing out whether this works for you?

Do you have a docker tag for this by any chance?

So in the meantime I've tried to find a solution for the schedule issue above. I had to make some changes to my keep-alive script so I could exclude some containers.

Combined with the ofelia container, I can manage decluttarr like so:

  decluttarr:
    image: ghcr.io/manimatter/decluttarr:latest
    container_name: mediacenter-decluttarr
    labels:
      sels.homebrewdotnet.nokeepalive: "true"
    depends_on:
      - radarr
      - sonarr
      - lidarr
    network_mode: service:vpn-client
    environment:
      - PUID=1700
      - PGID=1700
      - TZ=Europe/Brussels
      - REMOVE_TIMER=10
      - REMOVE_FAILED=True
      - REMOVE_METADATA_MISSING=True
      - REMOVE_MISSING_FILES=True
      - REMOVE_ORPHANS=False
      - REMOVE_SLOW=False
      - REMOVE_STALLED=True
      - REMOVE_UNMONITORED=True
      - PERMITTED_ATTEMPTS=6
      - NO_STALLED_REMOVAL_QBIT_TAG=~type_PrivateTracker
      - IGNORE_PRIVATE_TRACKERS=True
      - RADARR_URL=http://127.0.0.1:7878
      - RADARR_KEY=secret
      - SONARR_URL=http://127.0.0.1:8989
      - SONARR_KEY=secret
      - LIDARR_URL=http://127.0.0.1:8686
      - LIDARR_KEY=secret
      - QBITTORRENT_URL=http://127.0.0.1:6882
      - QBITTORRENT_USERNAME=secret
      - QBITTORRENT_PASSWORD=secret
    restart: "no"

  decluttarr-lifecycle-manager:
    image: mcuadros/ofelia:latest
    container_name: mediacenter-decluttarr-lifecycle-manager
    depends_on:
      - decluttarr
    network_mode: service:vpn-client
    command: daemon --docker
    labels:
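      # ofelia uses 6-field cron expressions (seconds first): fire at second 0 of every minute from 16:00 to 23:59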
      ofelia.job-run.declutarr_stop.schedule: "0 * 16-23 * * *"
      ofelia.job-run.declutarr_stop.command: "docker stop mediacenter-decluttarr"
      ofelia.job-run.declutarr_stop.image: "docker:latest"
      ofelia.job-run.declutarr_stop.volume: "/var/run/docker.sock:/var/run/docker.sock:ro"
      ofelia.job-run.declutarr_stop.no-overlap: "true"
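      # every 15 minutes from 01:00 to 15:45: (re)start the container in case it is down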
      ofelia.job-run.declutarr_start.schedule: "0 */15 1-15 * * *"
      ofelia.job-run.declutarr_start.command: "docker start mediacenter-decluttarr"
      ofelia.job-run.declutarr_start.image: "docker:latest"
      ofelia.job-run.declutarr_start.volume: "/var/run/docker.sock:/var/run/docker.sock:ro"
      ofelia.job-run.declutarr_start.no-overlap: "true"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped

So for me this issue can be closed.

> Do you have a docker tag for this by any chance?

Yep, 'dev' tag instead of latest ;-)
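
i.e. in the compose file above:

    image: ghcr.io/manimatter/decluttarr:dev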

Whoops, I was looking on Docker Hub, where I couldn't see the tag.
Pulled the dev version.
Will keep it running for a few days and I'll report back.

So far I haven't been banned or tagged for hit-and-run, which is good news.

I did notice this in my logs though:

[WARNING]: >>> Queue cleaning failed on formattedQueueInfo. (File: shared.py / Line: 128 / Error Message: 'downloadId' / Error Type: <class 'KeyError'>)
[WARNING]: >>> Queue cleaning failed on formattedQueueInfo. (File: shared.py / Line: 128 / Error Message: 'downloadId' / Error Type: <class 'KeyError'>)
[INFO]: >>> Detected stalled download (2 out of 4 permitted times): Stoker.2013.Bluray.1080p.DTS-HD.x264-Grym
[WARNING]: >>> Queue cleaning failed on formattedQueueInfo. (File: shared.py / Line: 128 / Error Message: 'downloadId' / Error Type: <class 'KeyError'>)
[WARNING]: >>> Queue cleaning failed on formattedQueueInfo. (File: shared.py / Line: 128 / Error Message: 'downloadId' / Error Type: <class 'KeyError'>)
[WARNING]: >>> Queue cleaning failed on Radarr. (File: remove_unmonitored.py / Line: 26 / Error Message: 'downloadId' / Error Type: <class 'KeyError'>)
[WARNING]: >>> Queue cleaning failed on formattedQueueInfo. (File: shared.py / Line: 128 / Error Message: 'downloadId' / Error Type: <class 'KeyError'>)

Also, I'm curious: how does decluttarr detect private trackers? I have 3 torrents from public trackers that are stuck downloading metadata, but I don't see them in the logs.

> Also, I'm curious: how does decluttarr detect private trackers?

Private torrents have a tag "is_private".
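
If you want to check that flag yourself, you can query the qBittorrent Web API directly. A rough sketch with curl (port taken from the compose file above; credentials and the torrent hash are placeholders, and note that only newer qBittorrent builds expose is_private in the properties response):

# log in once and store the session cookie
curl -c /tmp/qbit.cookies --data 'username=secret&password=secret' http://127.0.0.1:6882/api/v2/auth/login

# fetch one torrent's properties and look for "is_private": true
curl -b /tmp/qbit.cookies 'http://127.0.0.1:6882/api/v2/torrents/properties?hash=<torrent_hash>'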

On your errors: could you please create a separate issue and share the full logs?
I find it interesting that all these errors seem to have a problem with the key 'downloadId', which is supposed to be there.