Docker container doesn't start after update from v2.12.6 --> v2.13.1
Checklist
- Have you pulled and found the error with the jc21/nginx-proxy-manager:latest docker image? - Yes
- Are you sure you're not using someone else's docker image? - Yes
- Have you searched for similar issues (both open and closed)? - Yes
Describe the bug
After the update (docker pull) the application doesn't start.
I see the error:
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: open sysctl net.ipv4.ip_unprivileged_port_start file: reopen fd 8: permission denied: unknown
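The failing call is runc applying the container's net.ipv4.ip_unprivileged_port_start sysctl. As a rough sanity check (not a fix), one can look at that sysctl from inside the LXC that runs Docker; the paths below are the standard procfs locations:

```sh
# Read the sysctl runc is trying to set for the container.
sysctl net.ipv4.ip_unprivileged_port_start
cat /proc/sys/net/ipv4/ip_unprivileged_port_start
# echo 0 > /proc/sys/net/ipv4/ip_unprivileged_port_start   # write test; needs root and may be denied in an unprivileged LXC
```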
Nginx Proxy Manager Version
2.13.1
To Reproduce
Steps to reproduce the behavior:
1. Pull the newest docker image
2. docker compose up (a minimal compose sketch follows below)
3. See the error
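For context, the setup follows the usual NPM quick-start; roughly something like this (service name, ports and volume paths are the documented defaults, adjust to your own file):

```sh
# Minimal reproduction sketch: write a quick-start style compose file and bring it up.
cat > docker-compose.yml <<'EOF'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
EOF
docker compose pull
docker compose up -d   # fails here with the OCI runtime error on the affected hosts
```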
Expected behavior
Application starts.
Operating System
Docker is installed in an LXC container that is hosted on Proxmox.
Additional context
The installation was done over 2 years ago and has worked without any issue.
I use an LXC container on a Proxmox host; in that LXC container (Ubuntu OS) I installed Docker and then pulled NPM.
Same issue for me... I'm running basically the same environment: Ubuntu LXC container on Proxmox with Docker inside of it. Using docker compose produces the same error. Tried to prune containers, images, and networks, but no success.
I realized that some folders inside the working directory were owned by root instead of the user who runs the container. Tried chown -R on that working dir to make all folders owned by the user, but the issue didn't change.
What's weird is that the update to 2.13.0 worked fine... no issues. This happened when updating from 2.13.0 to 2.13.1, at least for me...
Same for me. I am running Debian 13 as an LXC in Proxmox
OK, I did a bit of troubleshooting. It seems this is not related to Nginx Proxy Manager. I have multiple LXC containers set up the same way for different docker images, and these do not work anymore either... For me it looks like the auto-update to "docker-ce/noble 5:28.5.2-1~ubuntu.24.04~noble" is the root cause. I did a restore of the LXC container with "docker-ce/noble 5:28.5.1-1~ubuntu.24.04~noble" and Nginx Proxy Manager 2.13.1 runs fine.
OK, it's for sure Docker version 5:28.5.2 that is causing the issue. I found another discussion in another project with the same error: immich-app/immich#23644
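For anyone wanting to verify which build they are on before rolling back, a sketch using standard apt tooling (the exact version strings depend on what `apt-cache madison` lists for your release):

```sh
# Show the installed docker-ce / containerd.io builds and what the Docker repo offers.
apt list --installed 2>/dev/null | grep -E 'docker-ce|containerd'
apt-cache madison docker-ce
apt-cache madison containerd.io

# Roll docker-ce back to the previous build (version string as reported above;
# pick the one 'madison' actually lists for your distribution/release).
sudo apt install docker-ce=5:28.5.1-1~ubuntu.24.04~noble \
                 docker-ce-cli=5:28.5.1-1~ubuntu.24.04~noble
```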
This is due to the containerd.io package (1.7.28-2); just downgrade to 1.7.28-1 and everything will work again.
apt install containerd.io=1.7.28-1~debian.12~bookworm
Edit:
They released containerd.io v1.7.29-1, but the problem persists ;)
You're right, it's sufficient to downgrade only containerd.io.
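A small follow-up sketch: after downgrading, holding the package keeps apt (or unattended-upgrades) from pulling the broken build back in.

```sh
# Hold containerd.io at the downgraded version; release the hold once a fixed build is published.
sudo apt-mark hold containerd.io
# sudo apt-mark unhold containerd.io
```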
I have the same issue; although I do run it in a Proxmox LXC, the base OS of my LXC is Arch Linux with podman 5.6.2.
I first tried to roll back to tag 2.12.6, but that didn't make any difference.
I restored a 3-day old backup of my container and everything worked just fine (still ArchLinux with podman 5.6.2).
I then upgraded the container to 2.13.1. The container starts and there are no errors in the logs, yet it won't accept any connections on ports 80 and 443, or 81 (the management interface), for that matter.
❯ Starting backend ...
❯ Starting nginx ...
[11/6/2025] [3:18:30 PM] [Global ] › ℹ info Using Sqlite: /data/database.sqlite
[11/6/2025] [3:18:30 PM] [Migrate ] › ℹ info Current database version: none
[11/6/2025] [3:18:30 PM] [Setup ] › ℹ info Logrotate Timer initialized
[11/6/2025] [3:18:30 PM] [Setup ] › ℹ info Logrotate completed.
[11/6/2025] [3:18:31 PM] [Global ] › ℹ info IP Ranges fetch is enabled
[11/6/2025] [3:18:31 PM] [IP Ranges] › ℹ info Fetching IP Ranges from online services...
[11/6/2025] [3:18:31 PM] [IP Ranges] › ℹ info Fetching https://ip-ranges.amazonaws.com/ip-ranges.json
[11/6/2025] [3:18:31 PM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v4
[11/6/2025] [3:18:32 PM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v6
[11/6/2025] [3:18:32 PM] [SSL ] › ℹ info Let's Encrypt Renewal Timer initialized
[11/6/2025] [3:18:32 PM] [SSL ] › ℹ info Renewing SSL certs expiring within 30 days ...
[11/6/2025] [3:18:32 PM] [IP Ranges] › ℹ info IP Ranges Renewal Timer initialized
[11/6/2025] [3:18:32 PM] [Global ] › ℹ info Backend PID 185 listening on port 3000 ...
[11/6/2025] [3:18:32 PM] [SSL ] › ℹ info Completed SSL cert renew process
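For what it's worth, this is roughly how the symptom can be confirmed from the host (80/81/443 are the NPM defaults; 127.0.0.1 assumes you test on the machine publishing the ports):

```sh
# Check whether anything is listening on the published ports and whether the admin UI answers at all.
ss -tlnp | grep -E ':(80|81|443)\b'
curl -sv -o /dev/null http://127.0.0.1:81/ || echo "admin UI not reachable"
```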
I reverted again and upgraded to 2.13.0, just to see if it would break there and to supply as much feedback on the issue as I could. That version has the same issue as I stated above: no errors in the logs, but unresponsive on all ports (connection refused). The weird thing is that reverting to 2.12.6 AFTER upgrading to 2.13.x doesn't fix the issue either. It's like something in the container data (perhaps permissions, database schema, whatever) has changed and is causing this issue.
Because I have similar issues and I'm not using containerd, I really think a change in 2.13.x is the root cause of this issue, and if you use containerd, the downgrade is just a workaround, not a solution. There is no difference in packages on my system between the 3-day-old backup and my current system, so I think that rules out any OS packages as the root cause.
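A sketch of how that package comparison can be done on Arch (pacman is the stock tool; the file names here are just illustrative):

```sh
pacman -Q | sort > /tmp/pkgs-backup.txt      # run on the restored 3-day-old backup
pacman -Q | sort > /tmp/pkgs-now.txt         # run on the current system
diff /tmp/pkgs-backup.txt /tmp/pkgs-now.txt  # empty output = identical package sets
```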
So I rolled back my LXC container once more, pinned my NPM container to version 2.12.6 to prevent it from breaking again, and disabled podman auto-update just for good measure.
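Roughly what that pin-and-freeze looks like with podman (image tag from above; the timer name is the one podman ships, and whether it runs as --user or system-wide depends on the setup):

```sh
# Pull and keep the known-good tag, and stop podman's auto-update timer.
podman pull docker.io/jc21/nginx-proxy-manager:2.12.6
systemctl --user disable --now podman-auto-update.timer
```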
I have the same error.