hassio-addons/addon-nginx-proxy-manager

β„Ή Updated to v1: Proxy no longer works, cannot login, doesn't work, lost all settings

frenck opened this issue Β· 116 comments

⚠️ Please do not respond to this issue with "I have the same issue" or similar. Thanks πŸ‘

Oh hi there πŸ‘‹

If you came to this issue because you are experiencing what is in the title, it probably means you:

  • You have automatic updates enabled for the Nginx Proxy Manager add-on, causing you to be automatically upgraded into this breaking-change release without knowing it.
  • You upgraded manually and didn't check the release notes.

This issue exists because we expected this to happen. To prevent everybody from creating the same issue, it has been pre-created with information on what is going on.

"What happened?"

I've shipped version 1 of the add-on, which has a huge breaking change:

It starts with a blank slate. You have to set up / configure the add-on from scratch.

This is why this version of the add-on has a major version bump and a big breaking change warning in the add-on release notes.

"Damn Frenck, this sucks. Why!?! 🀬"

So, this add-on hasn't received much love in the past two years. It had multiple issues:

  • An external database was required (MariaDB), which caused overhead.
  • The external database connection didn't always work reliably.
  • Let's Encrypt didn't always work and was hard to fix/debug.
  • Backups weren't complete. SSL certificates & the database weren't included in the backups.
  • Restoring the MariaDB backups hasn't been reliable, making it even harder to back up this add-on correctly.
  • The add-on was built at a time when the Nginx Proxy Manager didn't support anything but MySQL/MariaDB.
  • The add-on was built using a lot of workarounds to make it work as an add-on.
  • The Nginx Proxy Manager version shipped had a security issue (cvedetails.com/cve/CVE-2023-27224).

Version 1 of this add-on addresses all of the above.

"Why didn't you migrate the data?!"

I wish I could. I've spent endless hours making that happen reliably and tried many times in the past two years. I never reached an even remotely acceptable point of making that happen.

The time has come to bite the bullet. This is why this is published as a breaking change with a major version number.

"Can I now use this add-on without the MariaDB add-on?"

Yes, the add-on now uses SQLite and no longer relies on having the MariaDB add-on installed. Once you've upgraded and re-set up your proxy, you can remove the MariaDB add-on (if it was only used for the Nginx Proxy Manager).

"Does the backup now include everything?"

Yes, as of version 1+, backups of the Nginx Proxy Manager add-on are complete backups. This includes all configuration data and certificates.

"I don't have time for this bulls***"

Understandable. It would have helped if you read the actual release notes before upgrading. If you missed it, please restore the backup of the add-on that you made before upgrading.

Please note: Home Assistant doesn't provide a downgrade mechanism, so restoring a backup is the only correct solution.

"I don't agree with this change"

That is possible. While this change has the best intentions and, in my opinion, is the only way forward, we don't have to agree.

In that case, this add-on is no longer suitable for you. The best I can advise at this point is to uninstall it and look for a solution that suits you better.

"I have more questions!"

Please, feel free to drop them in this issue. I do want to ask you to keep the discussion here on topic, polite, and civilized. We are all just humans.

Final word

First of all: I'm deeply sorry I have to take this path that forces you to restart your proxy configuration.

Nevertheless, I do think this is the only correct way forward to resolve all issues around this add-on at this point. I hope in time you will agree.

Please accept my sincere apologies.

../Frenck

⚠️ Please do not respond to this issue with "I have the same issue" or similar. Thanks πŸ‘

I fully agree that sometimes a full and clean install is a good choice... but currently it is not running:


I have the same log as @DanielMisch. Did a full reboot just in case, and same behavior.

Thanks, @DanielMisch @wrenoud. I'm investigating that issue right now. Some reports on Discord as well. Will report back once there is an update on this.

../Frenck

OK, so I've been able to reproduce this in the latest version, but haven't experienced this at all in testing & development.

The problem seems to be that NPM generates, writes, or reads invalid JWT tokens on disk; after it tries to load those keys, it fails.

Might be node-env related.

I've pulled v1.0.0 from the add-on to prevent more people running into it, until I've found a solution.

Update: I found the issue. It was caused by yours truly in the last patch I made to fix certbot plugins. While it did fix that, it does cause issues on fresh setups.

The good news is that it is an easy fix that can be shipped quickly. No manual intervention is needed from you.

I'll update this issue once I've put out a new version (v1.0.1).

a very big thanks to frenck!!

I've published v1.0.1 to address the reported starting issue caused by an uncaught SyntaxError.

https://github.com/hassio-addons/addon-nginx-proxy-manager/releases/tag/v1.0.1

I've hidden the comments above related to that report, to keep this issue as clean as possible.

Thanks for reporting πŸ‘

../Frenck

What to do when you can't access the web ui to reconfigure it?

(video attachment: chrome_yIs0nlOqG1.mp4)

What to do when you can't access the web ui to reconfigure it?

Odd behavior, some kind of browser plugin/protection thing?
Anyways, visit https://192.168.1.10:81 in your case (the address of your HA instance, port 81, or the port you have configured in the Configuration tab -> network settings).

../Frenck

Good idea. I had disabled that port in the past; that's why the button didn't work. I reconfigured it to 81 and now even the button works. Thanks!

No issues here, waited to upgrade after I saw these comments, had 1.0.0 already pushed to me. Now upgraded perfectly to 1.0.1 from 0.28-something and reconfigured everything. Thanks for your work!

Oh, and don't beat yourself up too much over this breaking change in the first place, or the typo in 1.0.0. This stuff just happens in IT, and there is no shame in making mistakes; it's part of the process. You're delivering excellent quality, really appreciated.

Seeing the new features, I believe this change is worth it. Thanks a lot for the nice add-on; updated and reconfigured with no issues. Always read the changelogs, folks!

@frenck Keep up the good work!

In case you find yourself in the position of having to re-create a lot of proxy host configs, I wrote the following quick script to dump your existing proxy host records from the MariaDB setup. Run this from the old container, save the result somewhere, and you can use the info to help re-create the new config:

JSON_FILE="/data/manager/production.json"

# Pull the old database credentials out of the add-on's config
DB_HOST=$(jq -r '.database.host' "$JSON_FILE")
DB_NAME=$(jq -r '.database.name' "$JSON_FILE")
DB_USER=$(jq -r '.database.user' "$JSON_FILE")
DB_PASS=$(jq -r '.database.password' "$JSON_FILE")
DB_PORT=$(jq -r '.database.port' "$JSON_FILE")

# Dump only the proxy_host table data, as compact INSERT statements
mysqldump -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" -P "$DB_PORT" "$DB_NAME" proxy_host \
  --skip-add-drop-table --skip-add-locks --no-create-info --compact

@seancmalloy This seems to be caused by the way certbot is installed in the add-on. Could you open up a separate issue for tracking? Thanks!

../Frenck

Will do. I only have a handful of proxy records to recreate, no big deal. Some others will find aderusha's scripts very useful. Thanks, you guys rock.

Run this from the old container, save the result somewhere, and you can use the info to help re-create the new config:

This makes no sense. Don't send people into supervisor-managed containers. Many people in this ecosystem don't have experience with that. I have hidden your post for that exact reason.

In case you want to glance at your old configuration, just use the phpMyAdmin add-on to view the tables in the database. Same result, no command-line magic, and no need to keep an old container around either.

It's a poor excuse. You knew that many users would pull this data wipe (a.k.a. update) automatically (by any standards, it's a stable and maintained releases policy). Because the devs won't ask for help to migrate simple structured data? I peeked inside; IMHO, most DB admins migrate harder chunks routinely. I had more love for this add-on with all its flaws than for an add-on that just silently wipes your data one day, leaving no logical choice but to trash it.

Thanks, @SergeyPomelov. Unfortunately, it ain't a simple database migration. If the only roadblock was a database move, yeah, sure, I agree.

You knew that many users would pull this data wipe (a.k.a. update) automatically

I certainly hope not. IMHO, enabling that option is utterly stupid. I strongly discourage auto-updating. I always have. Regardless of this add-on, any update may contain unexpected changes, which it will happily auto-update into.

Please note you decide to enable such update features, not me.

I certainly hope not. IMHO, enabling that option is utterly stupid. I strongly discourage auto-updating. I always have. Regardless of this add-on, any update may contain unexpected changes, which it will happily auto-update into.

Please note you decide to enable such update features, not me.

When I started with HA back in late 2020, auto-updates were enabled by default for all add-ons. After a massive issue caused by auto-updating one add-on, I disabled it for all my add-ons; reading release notes and carefully planning updates is the right update strategy (for me). Just saying, because if the default for a fresh HA setup is still "auto-updates enabled by default"... rant incoming.

2 cents over, I really appreciate the communication of this update, really outstanding. Thx Frenck!

I just wanted to say thank you. I read the release notes, took the time to plan it, made notes of my old config, upgraded, re-added all of my 20 proxy hosts, re-added the Let's Encrypt DNS challenge, and it all works like a charm again. That was about 10-15 minutes of work; I spent humongously longer configuring all of this the first time. 😁

I really appreciate being honest about why things don't work and what has to be done to resolve that. Keep it up @frenck!

I have auto-updates enabled and was able to restore from a backup without issue. Re-setting up my config took about 20 minutes, but I am glad this add-on is now standalone. Automatic migration would have been nice, but I totally understand the complications and the amount of work making it not worth it.

It's a poor excuse. You knew that many users would pull this data wipe (a.k.a. update) automatically (by any standards, it's a stable and maintained releases policy).

I strongly disagree. Home Assistant's auto-update feature, while useful for some add-ons, is dangerous to enable for critical services. If you'd like to help out, I'd recommend submitting a PR to HA that would enable add-on developers to flag updates as breaking and prevent auto-updating to that version. That will solve the issue for all add-ons.

Also of note, your language came across to me as overly harsh. Please remember that this ecosystem is built on free, open-source software, and maintained by volunteers.

If you'd like to help out, I'd recommend submitting a PR to HA that would enable add-on developers to flag updates as breaking and prevent auto-updating to that version. That will solve the issue for all add-ons.

HA can infer breaking changes automatically when an add-on that follows semver bumps its major version. Maybe add-ons could opt into semver parsing through some config.yml key like versioning: semver.
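A minimal sketch of what that inference could look like, assuming plain MAJOR.MINOR.PATCH version strings. Everything here is invented for illustration; no such check exists in the Supervisor today:

```shell
#!/bin/sh
# Sketch: under semver, a major-version bump signals a breaking change.
# (Known caveat: in the 0.x range, any bump may technically break.)
is_breaking() {
  cur_major="${1%%.*}"   # text before the first dot
  new_major="${2%%.*}"
  [ "$new_major" -gt "$cur_major" ]
}

if is_breaking "0.12.3" "1.0.1"; then
  echo "major bump: skip auto-update"
else
  echo "auto-update ok"
fi
```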

This potential problem has caused me to disable the auto updates on EVERYTHING!! As far as I'm concerned auto update can be removed across the board. The potential of breaking unexpectedly at the most inopportune time is too great.

I can't believe there isn't a better way to release an incompatible version than the "surprise, it's broken" without-notice method. This topic, and warning, should be widely published.

haven't upgraded yet, but was wondering about the background. Thanks for the good communication! :)

I'm using the add-on because of my limited knowledge on this topic. If I currently have a subdomain with SSL (e.g. example.domain.com), can I still use 'example' as the domain for a new certificate, or should I change the subdomain?

So it appears I've accidentally killed myself here by using automatic updates. All config is lost, and I have a lot of work to do to get it back; such is life.

QUESTION: I have a rather old backup composed of a bunch of .conf files. Can I use these somehow to restore the configuration? If so, what's the best mechanism?

Thank you, and thanks for all the efforts on making this a better product, regardless of the growing pains we may suffer!

Best,

George

@george-viaud

In summary, no. The way the application works is that it populates the database (previously MariaDB) and then exports the data out to NGINX conf files. It's a one-way sync; unfortunately, you will need to recreate the config, though you can of course use the conf files as a reference.

Glad I saw this in my notifications and screenshotted everything I needed to ... (not a lot). If the notification is aware of the update, surely auto-update would have already run? I've also turned off my auto-updates, as I'm happier to have some traceability.
I've never really used MariaDB, so I presume my instance of it is due to this add-on. I've stopped it, disabled watchdog and start-on-boot, and will see what goes wrong. So far, nothing.

On topic but much broader: it would be nice if auto-updates were version-limitable, similar to how composer works, so you could specify constraints like 1.* for upgrades (minor versions only, for example).
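For what it's worth, a composer-style constraint check is almost free to express with shell globs. This is only an illustration of the idea; Home Assistant has no such feature, and matches_constraint is an invented name:

```shell
#!/bin/sh
# "1.*" matches any 1.x.y version; "2.0.0" falls outside the constraint.
matches_constraint() {
  case "$2" in
    $1) return 0 ;;  # $1 is deliberately unquoted so it acts as a glob pattern
    *)  return 1 ;;
  esac
}

matches_constraint "1.*" "1.0.1" && echo "update allowed"
matches_constraint "1.*" "2.0.0" || echo "update blocked"
```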

I tried to restore ngnix-proxy-manager and mariadb from my backups, but it does not restore my proxies. Did I miss a step or do you have any suggestions? Thanks.

I see that it restored 0.12.3, but it is not using MariaDB:

[22:18:08] INFO: Starting NGinx...
[1/13/2024] [10:18:09 PM] [Global   ] β€Ί β„Ή  info      No valid environment variables for database provided, using default SQLite file '/data/database.sqlite'

Just restoring the partial backup of the add-on is normally enough. It seems you did restore (as it tries to use the "/data/" folder).

It seems you have run into one of those beautiful edge cases where you magically end up with SQLite in the old version; I've seen many reports of these, but have never been able to reproduce it.

You said you restored MariaDB as well (which was not really needed); did it come up correctly? Have you tried accessing it with phpMyAdmin to check whether all the data is there?

../Frenck

@frenck Thanks for the reply. Not sure why, but restoring the partial backup was not working for me. Anyway, after your comment I just used the phpMyAdmin add-on to browse the MariaDB proxy table to recreate my proxies after re-updating to NPM 1.0.1. It was easy enough once I knew where to look.

Reading this before updating. I feel like I'm gonna be in the minority on this one.

My current configuration includes self-hosted CA-signed SSL certificates. I notice this is no longer supported after I update. Is there a way to reconfigure my add-on to restore this support?

The auto-update is enabled by default, AFAIK. Anyway, this isn't a reason to commit breaking changes if devs use delivery channels that are supposed to deliver stable and maintained versions. Every non-beta-labeled auto update must be safe; however, as I understood here, for HASS this isn't as solid as I stated. It is still the devs' responsibility, not the users', IMHO. About "utterly stupid to enable": you could put up a remark to turn it off before installing your add-ons as a free fix (an "experimental" badge or "you use HASS add-ons" aren't clear enough messages for a data wipe via a default delivery feature).

I updated and am able to make subdomains like nas.domain.nl with SSL.
But in the past I had one *.domain.nl registration,
and for every proxy host I configured I chose that *.domain.nl certificate.
It worked, but now I can't register a wildcard transip.nl domain with an SSL cert anymore.
I converted back to the old add-on for the time being. Is it possible that I pick the 3 needed files from there (don't know how yet) and register them as custom again?
I only get the message "cannot find key", and when clicking on custom I don't know which files to use. I can't find the old tutorial anymore.

Or do I have to register my 17 subdomains, getting a new SSL certificate for each one?

Not related to the bug. As a developer, I have never seen data that was mathematically/technically impossible to migrate if you can go offline. Very interested. @frenck, could you fill my experience gap (email a raw problematic data copy-paste, a code pointer, maybe)?

My current configuration includes self-hosted CA-signed SSL certificates. I notice this is no longer supported after I update. Is there a way to reconfigure my add-on to restore this support?

It is supported; place your custom SSL certificate in the custom_ssl folder that is in the addons_config share of this add-on.
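A sketch of the expected folder layout, using a temporary directory as a stand-in for the real share (the share path shape and the "mycert" folder name are assumptions for illustration; check the actual addon_configs folder name on your install):

```shell
#!/bin/sh
set -e
# Stand-in for /addon_configs/<slug>_nginxproxymanager (assumed path shape)
SHARE="$(mktemp -d)"
mkdir -p "$SHARE/custom_ssl/mycert"

# Placeholder certificate files; use your real fullchain/privkey PEMs instead
: > /tmp/fullchain.pem
: > /tmp/privkey.pem
cp /tmp/fullchain.pem /tmp/privkey.pem "$SHARE/custom_ssl/mycert/"

ls "$SHARE/custom_ssl/mycert"
```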

The auto-update is enabled by default AFAIK.

It is not.

…if devs use delivery channels which are supposed to deliver stable and maintained versions.

This is my personal project I do in my spare time. There are no "devs" as in plural.

Every single one of my add-ons has a project stage badge shown on its index page. This add-on is marked experimental, not stable.

But in the past I had one *.domain.nl registration.
and by every proxy host i configured I choose that *.domain.nl certificate

@remb0 That should work, it does require the use of the DNS challenge.

Did you use that?

There should be a flag for breaking changes like that to prevent auto-updates.

There should be a flag for breaking changes like that to prevent auto-updates.

Yes, please πŸ™ I would always set it, on all updates, even patches. Just to ensure no-one ever uses auto-update again.

Edit:

Hehe I noticed the downvote (@DanielVoogsgerd), understandable, let me add a little more context to this comment to explain where this comes from.

Small updates may have an unexpectedly large impact. For example, shipping some small bug fix that turns out to be critically breaking for many use cases; regression is not an uncommon thing. As the maintainer of an add-on, auto-update leaves you with no options.

Yank/pull/revert the release? Too darn late; everything has already updated into a steaming pile of junk, fully automagically.

I have done many thousands of add-on releases since 2018. I test & care, almost all have been without issues πŸ™ , but I'm also human. If I had the ability for my add-ons, yes, I would make auto-updating impossible. No doubt in my mind.

This happens to me after I try to add SSL CERTIFICATES:
1: Trying it while I'm creating the host: INTERNAL ERROR
2: Trying it manually from the SSL CERTIFICATES tab:

Error: Command failed: certbot certonly --config "/etc/letsencrypt.ini" --work-dir "/tmp/letsencrypt-lib" --logs-dir "/tmp/letsencrypt-log" --cert-name "npm-13" --agree-tos --authenticator webroot --email "xxxxxx@mail.com" --preferred-challenges "dns,http" --domains "xxxxxxxxxx.duckdns.org"
Saving debug log to /tmp/letsencrypt-log/letsencrypt.log
Some challenges have failed.
Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /tmp/letsencrypt-log/letsencrypt.log or re-run Certbot with -v for more details.

at ChildProcess.exithandler (node:child_process:422:12)
at ChildProcess.emit (node:events:517:28)
at maybeClose (node:internal/child_process:1098:16)
at ChildProcess._handle.onexit (node:internal/child_process:303:5)

I disagree with @frenck's point of view here, as shown by the downvote, but since it is a little off-topic I was going to let it at that. However, since I got mentioned, I will share a single comment on why I disagree.

Let me first say that I do realise that auto-updates carry an inherent risk, a risk I try to mitigate to some extent by using backups, for example, and whatever risk is left I gladly take on. I run a fairly decent number of plugins, all of them with auto-update enabled. After years of no problems whatsoever, this is the first time an auto-update "backfired". I don't blame you for the update path; I appreciate the work you put into developing this and other plugins.

Small updates may have an unexpectedly large impact. For example, shipping some small bug fix that turns out to be critically breaking for many use cases; regression is not an uncommon thing. As the maintainer of an add-on, auto-update leaves you with no options.

This is where I mostly disagree. Your statement here is entirely correct, that could definitely happen, but there is a big difference between a change we know is backwards incompatible and a change that could run into some unforeseen regression.
Differentiating between those cases could let the user reduce (not eliminate) the risk of updating into a backwards incompatibility, helping the people who decide that auto-updating is a net positive for them, as it reduces their maintenance burden.

Again, thanks for the plugin, and with or without auto-updates, I will gladly continue to use it in the future.

I did the upgrade.
Reconfigured the proxy settings.
I successfully reach the login page.
But then I get "Unable to connect to Home Assistant. Retrying ..."

But then I get "Unable to connect to Home Assistant. Retrying ..."

Hmm did you check the box to enable websockets on that specific host?

I missed this.
It's fine now.
Thanks for your prompt reply, and for your work too.

Hi, thanks for your great job.
I upgraded to version 1.0.1. After logged and created the admin user I'm not able to log in anymore. The login page error is "No Relevant User Found". I tried to reinstall the add on but same error when trying to login with admin@exlample.com / changeme. How can I fix the problem? For now I restored the previous version.

Hi guys! Due to the last update in nginx, I need to know where to put my cert in the custom certificates. Thanks!

Try cert2.pem as Certificate.
And privkey2.pem as Certificate Key

I was lucky enough to spot the v1.0.0 version number and dug into the release notes out of curiosity; otherwise it would have auto-updated.

@frenck, as a suggestion for the next time something like this happens: have a version in between that is there for some time (~3 months?). This version could then prepare users to disable auto-update for that particular add-on.

I see it as pointless and somewhat irritating to rant against the auto-update feature. I find it very useful and get lots of small automatic updates to random add-ons that are non-critical for me. It turns out NGINX Proxy Manager is one of the most critical add-ons, which is why I had my own automation to update overnight so as not to disrupt any traffic.

While the feature is there, people will use it. It might be worth introducing a semver like auto updating mechanism or letting devs set a flag.

Human mistakes that can happen in patch releases cannot be compared to a full wipe such as this update.

I somehow messed up the 1.0.1 install, so I uninstalled and reinstalled it. But with this seemingly fresh install, it gives me the following error and just stalls.

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service base-addon-banner: starting


Add-on: Nginx Proxy Manager
Manage Nginx proxy hosts with a simple, powerful interface

Add-on version: 1.0.1
You are running the latest version of this add-on.
System: Debian GNU/Linux 12 (bookworm) (amd64 / qemux86-64)
Home Assistant Core: 2024.1.3
Home Assistant Supervisor: 2024.01.0.dev0201

Please, share the above information when looking for help
or support in, e.g., GitHub, forums or the Discord chat.

s6-rc: info: service base-addon-banner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service base-addon-log-level: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service base-addon-log-level successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service init-npm: starting
s6-rc: info: service init-nginx: starting
s6-rc: info: service init-npm successfully started
s6-rc: info: service npm: starting
s6-rc: info: service npm successfully started
s6-rc: info: service init-nginx successfully started
s6-rc: info: service nginx: starting
[08:10:46] INFO: Starting the Manager...
s6-rc: info: service nginx successfully started
s6-rc: info: service legacy-services: starting
[08:10:46] INFO: Starting NGinx...
nginx: [warn] protocol options redefined for 0.0.0.0:443 in /config/nginx/proxy_host/3.conf:14
s6-rc: info: service legacy-services successfully started
nginx: [emerg] cannot load certificate "/etc/letsencrypt/live/npm-6/fullchain.pem": BIO_new_file() failed (SSL: error:80000002:system library::No such file or directory:calling fopen(/etc/letsencrypt/live/npm-6/fullchain.pem, r) error:10000080:BIO routines::no such file)
[08:10:46] INFO: Service NGINX exited with code 1 (by signal 0)
s6-rc: info: service legacy-services: stopping
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service nginx: stopping
s6-rc: info: service nginx successfully stopped
s6-rc: info: service init-nginx: stopping
s6-rc: info: service npm: stopping
s6-rc: info: service init-nginx successfully stopped
[08:10:46] INFO: Service Nginx Proxy Manager exited with code 256 (by signal 15)
s6-rc: info: service npm successfully stopped
s6-rc: info: service init-npm: stopping
s6-rc: info: service init-npm successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service base-addon-log-level: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service base-addon-log-level successfully stopped
s6-rc: info: service base-addon-banner: stopping
s6-rc: info: service base-addon-banner successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped

Is there some corrupt config file left somewhere?
I tried Beta 1.0.1 and it works fine.

The auto-update is enabled by default AFAIK.

It is not.

…if devs use delivery channels which are supposed to deliver stable and maintained versions.

This is my personal project I do in my spare time. There are no "devs" as in plural.

Every single one of my add-ons has a project stage badge shown on its index page. This add-on is marked experimental, not stable.

I really appreciate the transparency. I was one of the people that had auto-update on, so I never had the chance to see the warning. But even with the experimental flag, I don't think this was the most responsible way of doing it.

Having auto-updates ON is something risky, yes, but it's also a way to help keep systems secure.

I managed to roll back the update from a backup, because auto-updates thankfully make backups automatically.

Just some constructive feedback on some ways I would handle this case when there was not an upgrade path available.

One would be to deprecate this project and clone it as a new one, push an update, and disclose the deprecation in the changelog so people know to do their own updates manually later.

Another would be to push an update that spits out logs and push notifications to HASS (I think that's something add-ons can do, if I remember correctly) about the deprecation and, just like before, warns in the changelog about the upcoming upgrade path; give users about a month so most have a chance of handling it.

Something that I think should be proposed to the HASS devs is to disable or at least warn users about enabling auto update on experimental packages.

And the last thing: even if this is a personal project or a hobby, you should aim to give the best experience possible, both so people can trust your craft even more and as a way to improve your skills, especially if you know your project is deployed to a decent number of people.

But overall it's still a great update; it's great not having to depend on MariaDB anymore. Best wishes.

@frenck, as a suggestion for the next time something like this happens: have a version in between that is there for some time (~3 months?). This version could then prepare users to disable auto-update for that particular add-on.

I see it as pointless and somewhat irritating to rant against the auto-update feature. I find it very useful and get lots of small automatic updates to random add-ons that are non-critical for me.

I believe the ultimate solution lies in a compromise.

The auto-update feature is awesome for add-ons with dedicated developers that can get plenty of testing through avenues like beta channels (ex: ESPHome). But @frenck can't be expected to maintain a bunch of beta add-ons, release schedules and data migrations for something that is ultimately a hobby. However, stripping the auto-update capability entirely for an add-on will just lead to hacky workarounds and more confusion.

I feel the best solution would be allowing a flag in the config.yaml manifest where the author can define the last non-breaking version. If an update comes out with auto-update enabled and the current version is less than this "require confirmation" version, then a notification is thrown and the auto-update is aborted until a user manually issues the update. Maybe even allow a string or update_warning.md to describe the nature of the breaking change on the notif/add-on page.
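Sketched out, that gate could be as small as the following. Everything here is hypothetical: LAST_NON_BREAKING does not exist as a config.yaml key, version_lt is an invented helper, and the version comparison leans on GNU sort -V:

```shell
#!/bin/sh
# Hypothetical "require confirmation" gate for auto-updates.
LAST_NON_BREAKING="1.0.0"   # would come from the add-on manifest (invented key)
INSTALLED="0.12.3"          # the user's currently installed version

# True when $1 sorts strictly before $2 as a dotted version string
version_lt() {
  [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

if version_lt "$INSTALLED" "$LAST_NON_BREAKING"; then
  echo "auto-update aborted: breaking change, confirm manually"
else
  echo "auto-update allowed"
fi
```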

This isn't a discussion topic, please feel free to continue your conversations at https://github.com/hassio-addons/addon-nginx-proxy-manager/discussions

At this point, I have to ask... what is the advantage of using this addon over the native docker method? https://nginxproxymanager.com/guide/#quick-setup

There is seemingly a breaking change in every release here. The last one I remember was you couldn't login to the interface, and had to do a very hacky restore of the user to get back in.

I think breaking changes are somewhat acceptable, provided there is a way for the user to manually migrate. I see no discussion of how to migrate existing setups. In my opinion, this type of change is unacceptable. There could have been a minor update which sent a notification to HA with a warning, giving people ample time to prepare, before pushing the breaking update...

I read all comments here, and didn't see a way to restore... so I'm providing instructions here for those who don't live in HA 24/7.

  1. Go to Settings/System/Backups
  2. Search for nginx and sort by the created date, selecting the newest backup
  3. Check both boxes and click Restore
  4. Go disable automatic updates!!!

Hi, someone can please help me? I got the "No Relevant User Found" error...
thanks in advance

Hi, thanks for your great job. I upgraded to version 1.0.1. After logging in and creating the admin user, I'm not able to log in anymore. The login page error is "No Relevant User Found". I tried to reinstall the add-on, but I get the same error when trying to log in with admin@exlample.com / changeme. How can I fix the problem? For now, I restored the previous version.

scorbacella - the admin email is misspelled (@exlample instead of @example). Not sure if that's only a typo here or when trying to login as well.

For the record, I read the release notes, which did not tell me how to check my configuration. I hadn't thought about this project in years, so I couldn't remember what I had done. I performed the upgrade, and nginx broke. I restored a backup, did research to find out how to log back into the proxy manager, and discovered there was no configuration to copy or back up to the new version, either manually or by looking in the UI.

Eventually I found this issue, which tells me that I'm having the problem because I didn't read the release notes, which I did. So now I feel rather blamed for this work I've had foisted on me, and neither this issue nor the release notes actually addresses the problem I'm having.

Since there's no config for me to copy over, and no additional information in this issue or the release notes, as far as I can tell this add-on is permanently broken for me.

Since I do not appreciate being blamed for things that are not my fault, or having work foisted on me just to keep systems exactly as they were, I will just remove this add-on from my HomeAssistant instance and configure nginx myself. I encourage everyone who is capable of editing that configuration to do so.

If anybody needs help doing this, shoot me a DM. It's not that hard, in fact, it's easier than re-learning the UI for this project every three years, and is useful in other contexts.

I have been postponing this, but I finally did it and it went as smoothly as I hoped. Thank you so much for all your good hard work πŸ‘ I assume it is safe to delete the /ssl/nginxproxymanager/ folder after the update; is that a correct assumption?

aoeu commented

@frenck I'm not one of your users, nor am I affected by this change, but I'm in a channel where others are talking about it and have read the issue. I trust you didn't mean it to be, but this is pretty condescending language to your users: "Understandable. It would have helped if you read the actual release notes before upgrading. If you missed it, please restore the backup of the add-on that you made before upgrading."

It reads like victim blaming - "It would have helped if you read the actual release notes before upgrading" isn't going to help anybody feel better who didn't (and who may be frustrated or stressed out at the moment).

A more empathetic way to communicate to your users would read something like:
"Understandable. I tried to warn about this in the release notes, and I didn't have a better way to reach everyone before the upgrade. I am hoping everyone made a backup before upgrading." It gets the same info across, that you did try to warn everybody and that they could do a rollback, but without directly telling users it is their fault for not reading the fine print or making a backup.

I know you were not trying to be mean, but please be kinder, software is hard enough.

I gotta say I disagree. If we are talking about voluntary work, I believe no one has any obligation to be kind.

I also think it sucks having your system broken by an upgrade, but again, if we are talking about voluntary work, and one does not like it, just make a better one then.

Surely I try to be as kind as possible, being voluntary or not, but I would not feel comfortable demanding someone who is already voluntarily writing a very useful piece of software for me to be more kind.

That said: yeah, your suggested message would be nicer for sure.

Hello,
Did the following and all went well. However, I have one inquiry: will the certificates (all from Let's Encrypt) be automatically renewed once they expire?

1- Downloaded my certificates
2- Updated NGINX Proxy Manager from 0.12.x to 1.0.1
3- Added my downloaded certificates as 'Custom'
4- Manually re-added all my proxy hosts

Thanks to everyone putting any effort for the others :)
RS

Worked great after making note of my existing configuration. The only issue I had was not noting the extra configs I had added to get ESPHome log streaming working, so I thought I would share that here for anyone else running into that issue.

Custom locations

location: /api/hassio_ingress
scheme: http
forward hostname/ip: [local IP of HA]
forward port: 8123
custom configuration:

proxy_set_header Host $http_host;
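For reference, the UI fields above translate to roughly the following nginx location block. This is a hedged sketch, not the exact config NPM generates: the IP is a placeholder for your local HA address, and the websocket `Upgrade`/`Connection` headers are an assumption that commonly helps with streaming endpoints like ESPHome logs.

```nginx
# Approximation of the custom location described above.
# 192.168.1.10 is a placeholder; use the local IP of your HA instance.
location /api/hassio_ingress {
    proxy_pass http://192.168.1.10:8123;
    proxy_set_header Host $http_host;
    # Websocket support (assumed; often required for log streaming):
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```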

If we are talking about voluntary work, I believe no one has any obligation to be kind.

Well, @felipecrs, @frenck appears to be employed by Home Assistant, so we're not talking about voluntary work, and I do believe this change affects paying Home Assistant Cloud customers, so the hostility in the release notes is probably a bad business decision.

Since I do not pay for Home Assistant Cloud and I manage my Home Assistant OS box myself, I'm essentially a beta tester. If I report that this bug is terrible, affected me, and that there are no remediation steps provided in the release notes I was supposed to read, and Home Assistant Inc wants to ignore my feedback (and imply that I can't read), that's not my problem.

But it does make me less likely to keep helping them test their product, and I definitely won't recommend it, at least not without telling this story!

@gigawhitlocks This is my personal project, which I maintain in my spare time. It is not supported, maintained, or funded by Nabu Casa.

Hence this is also not part of the Home Assistant org on GitHub.

So, yes, it is voluntary.

In that case it sounds like they should be paying you for this work since it's critical to their product @frenck ; I'm sorry that they are not compensating you sufficiently that you can spend your full time on this. Unfortunately I cannot help but feel you're being taken advantage of by Nabu Casa if they do not pay you for this work. That's the last I'll say on this thread.

I'm sorry that they are not compensating you sufficiently that you can spend your full time on this. Unfortunately I cannot help but feel you're being taken advantage of by Nabu Casa if they do not pay you for this work.

That comment makes no sense at all. I work on the Home Assistant project full-time as my day job. This is not the Home Assistant project.

I will end this conversation here as well, as it is going nowhere and is far from productive or helpful for anyone. As a matter of fact, it is highly demotivating for something I like to do as a hobby.

If you want to approach this hostile, that is fine, go do that somewhere else, don't bother me with that shit. Thanks πŸ‘

../Frenck

I am not trying to be hostile, just trying to explain why the Release Notes were not helpful, I'm sorry

just trying to explain why the Release Notes were not helpful, I'm sorry

Nothing in the last replies has been towards that, not even close.

They told me that I would only find this issue if I didn't read the release notes, which I did; the very vague instructions were not helpful to me, and I still have the original issue related to this ticket.

@gigawhitlocks After your last stunt: I don't care what you have. I'm not motivated to do anything to help you. Instead, I think I'll watch a movie tonight. This is my own time.

../Frenck

I was trying to help YOU. I don't need your help. I know how to configure nginx. I'm sorry that I have failed to communicate anything to you in a way you can understand. Goodbye.

At this point, I have to ask... what is the advantage of using this addon over the native docker method?

@ReenigneArcher 🀷 The goal is to have as little difference as possible, while maintaining native backup support from Home Assistant and providing an installer/the ability to run it without Docker knowledge.

If you are more comfortable with Docker, please use that πŸ‘

There is seemingly a breaking change in every release here.

As far as I am aware, this is the first big breaking change on this add-on?

I think breaking changes are somewhat acceptable, provided there is way for the user to manually migrate. I see no discussion of how to migrate existing setups.

As stated in the topic starter: no migration path is possible. Sorry!

I read all comments here, and didn't see a way to restore... so I'm providing instructions here for those who don't live in HA 24/7.

Thanks! Although I must be honest: I consider that a bit of common knowledge for the average HA user. It might be helpful for some, though!

../Frenck

I have been postponing this, but I finally did it and it went as smoothly as I hoped. Thank you so much for all your good hard work πŸ‘ I assume it is safe to delete the /ssl/nginxproxymanager/ folder after the update; is that a correct assumption?

@hulkhaugen Yes, that assumption is correct πŸ‘

../Frenck

but this is pretty condescending language to your users: "Understandable. It would have helped if you read the actual release notes before upgrading. If you missed it, please restore the backup of the add-on that you made before upgrading."

It reads like victim blaming - "It would have helped if you read the actual release notes before upgrading" isn't going to help anybody feel better who didn't (and who may be frustrated or stressed out at the moment).

@aoeu Sorry, I don't share that sentiment. It isn't blaming anyone; it just lists a possible path by which you ended up here. If you don't read what you install (automatically or not), it might come as a surprise. This topic aims to prevent floods of issues. It has no emotional charge for me in that sense, nor do I intend to load it emotionally.

but without directly telling user it is their fault for not reading the fine-print or making a backup

I did not comment or say that. I don't want to sugar-coat it.

I know you were not trying to be mean, but please be kinder, software is hard enough.

It is, you are looking at a result from the other end πŸ˜‰

../Frenck

Hello, Did the following and all went well. However, I have one inquiry, Will the certificates (all from Let's Encrypt) be automatically renewed once they expire?

@RASBR Yes they will automatically renew πŸ‘

@frenck Thank you very much. That's a relief

Thank you so much for updating this add-on, another extra device eliminated!

Great thanx @frenck. The update was smooth and fast. The longest part of it was the decision making process - breaking changes are a bit scary anytime.

This happens to me after I try to add SSL certificates: 1: Trying while I'm creating the host: INTERNAL ERROR. 2: Trying manually from the SSL Certificates tab:

Error: Command failed: certbot certonly --config "/etc/letsencrypt.ini" --work-dir "/tmp/letsencrypt-lib" --logs-dir "/tmp/letsencrypt-log" --cert-name "npm-13" --agree-tos --authenticator webroot --email "xxxxxx@mail.com" --preferred-challenges "dns,http" --domains "xxxxxxxxxx.duckdns.org" Saving debug log to /tmp/letsencrypt-log/letsencrypt.log Some challenges have failed. Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /tmp/letsencrypt-log/letsencrypt.log or re-run Certbot with -v for more details.

at ChildProcess.exithandler (node:child_process:422:12)
at ChildProcess.emit (node:events:517:28)
at maybeClose (node:internal/child_process:1098:16)
at ChildProcess._handle.onexit (node:internal/child_process:303:5)

@Jotasct I had the same issue. At first I added the certificate previously downloaded from the old version of NPM as a custom one in the "SSL Certificates" panel (it worked), but I think automatic renewal is not activated, because that certificate is no longer flagged as a 'Let's Encrypt' one.
I then discovered that port 80 was not open on my router, so I opened it and redirected it to my HA instance like port 443 and boom: the certificate generation worked!

@Jotasct I had the same issue and at first I added the previously downloaded certificate from old version of NPM as custom one in the "SSL Certificates" panel (it worked) but I think that automatic renew is not activated because this certificat is no more flagged as 'let's encrypt' one. I secondly discovered that my port nΒ°80 was not opened on my router so I opened it and redirected it to my HA instance like port 443 and BOUM: the generation of the certificat worked !

I have the same question and am wondering which files you got and from which locations.
You have to add 3 files, but I'm not sure which :P

Hello, Did the following and all went well. However, I have one inquiry, Will the certificates (all from Let's Encrypt) be automatically renewed once they expire?

1- Downloaded my certificates 2- Updated NGINX Proxy Manager from 0.12.x to 1.0.1 3- Add the my downloaded certificates as 'Custom' 4- Manually added all my proxy hosts

Thanks to everyone putting any effort for the others :) RS

Can you say which files you downloaded from which location? I will try updating tomorrow and will add my current wildcard domain certificate and setup my proxy hosts again.

Hello, Did the following and all went well. However, I have one inquiry, Will the certificates (all from Let's Encrypt) be automatically renewed once they expire?

@RASBR Yes they will automatically renew πŸ‘

I dug into NPM's sources and I believe the answer is actually no.

The renewal code only runs on certificates with provider set to letsencrypt.

3- Add the my downloaded certificates as 'Custom'

The "Custom" upload form sets provider to other.

And when you think about it, adding a Let's Encrypt certificate even for the HTTP challenge requires entering at least an email. When you're uploading a custom certificate, there's no field for it, so renewal shouldn't be possible and probably isn't.

The easiest path forward is probably requesting new certificates from LE as if you're setting up from scratch.
No hate towards @frenck, there seems to be no obvious documentation explaining this.

cc @baldisos @HarlemSquirrel (tagging the other upvoters of that answer on the off-chance that'll save them from their custom certs expiring)

How would you expect the add-on to renew a manually uploaded certificate? It's just a file. It will renew anything that it has requested and manages.

it's just a file

(off-topic) It's a "curse of knowledge" situation. If you know what's in it, yeah. If you don't, it's not completely outlandish to assume that renewal information might be included in there somewhere. In fact, it's entirely possible to implement downloading of renewal parameters into a file and importing it back in NPM, it's just that nobody has done it yet (the usefulness of that feature is probably too limited to bother).

I dug into NPM's sources and I believe the answer is actually no. The renewal code only runs on certificates with provider set to letsencrypt. […] The easiest path forward is probably requesting new certificates from LE as if you're setting up from scratch.

Hi @D-side,
Before adding my certificates as 'Custom', I tried to issue new ones the same way I did before, using the challenge for GoDaddy, but it always gave me an error. BTW, the same error happens with NPM on another Docker host after updating the container.

but it always gave me an error

@RASBR please gather some more details, e. g. what the error says exactly and what the logs say in the process.
Also, it's probably unrelated to this particular issue and should be discussed elsewhere.

Can all the logging be placed in the add-on folder?
Another question: I always had my certs in /ssl/.
I created a new one, but where is it located now? How do I retrieve that location?

@Jotasct I had the same issue. At first I added the previously downloaded certificate as a custom one (it worked), but I think automatic renewal is not activated for it. […] I then discovered that port 80 was not open on my router; once I opened it and redirected it to my HA instance like port 443, the certificate generation worked!

Thx! I did it and it worked, but as you said, it looks like renewal is not automatic. How would you renew these certificates?

Hi, migrated today with 48 hosts:

  1. If you get the error shown a few comments above on the *.duckdns.org challenge, just wait a moment and try again a few more times. I have no idea what the issue was in the first place, but suddenly, without any interference, it started working on its own and now works reliably. Error handling in NPM's UI is just plain horrible (the user really wants to know which exit signal was emitted 🀣) and the retries should probably be configurable, but I just want to let you know this is an option; give it a try if you don't want to start digging deeper, it might just help 😊
  2. Not sure where I've seen it, but someone somewhere claimed the wildcard certificate is not working correctly; it's working just fine.
  3. You can do the "migration" by using around 15 lines of JS on the browser frontend. I'm definitely not going to share anything, because this is already a heated discussion and I really want to stay out of it. But it's what I've done; it would be crazy for me to migrate around 50 hosts by hand.

How would you renew these certificates?

@Jotasct you have to request new ones, most likely. If you're having issues with that, it's probably a "general question", not about this issue, and should be asked in the general NPM discussion spaces.

I'd been worrying about this for a couple of weeks, with the 'if it ain't broke' mantra winning the internal debate! Bit the bullet yesterday though and spent an hour documenting all of my existing settings (19 proxy hosts with Letsencrypt SSL) and then did the update today and recreated everything, which probably took another hour. Everything worked as expected though, and no issues, so thanks to all involved.

It's a shame you're so arrogant and dismissive of the community. You broke this; don't gaslight your community by saying "read the release notes".
A breaking change should be blocked from auto-updates; you should have resolved this before pushing it. Some of us struggle with redoing config and find it extremely frustrating. It would have been best if you had put something in place to block auto-updates, but instead you put it on us.
This is not the first time this "users come second" attitude has caused issues.

@Techknowledgeman keep it respectful. There is no place for that type of discussion here or anywhere on GitHub... especially not from someone who has never contributed anything to any project.

@Techknowledgeman keep it respectful. There is no place for that type of discussion here or anywhere on GitHub... especially not from someone who has never contributed anything to any project.

Who are you to tell me how to behave? You have no idea what I have contributed; this is not my only GitHub account.

What I state is known fact: he should learn his lesson, be less of a dick to people, and be more focused on user experience.

Breaking updates should be blocked from auto-updates, instead of blaming the user for not checking.

Is the option to use a proper DBMS removed? I can't seem to find a spot to re-configure MariaDB.

Is the option to use a proper DBMS removed? I can't seem to find a spot to re-configure MariaDB.

No need, the add-on now uses a built-in SQLite database.

Alright, after digging further, I'm afraid I have to disagree that a breaking change was needed, at least without a maintenance release. I don't have time to recreate my entire NPM config, so here's what I did:

Copied ALL data dirs in use by NPM, including /etc/letsencrypt
Created a backup/copy of the NPM database
Launched NPM using MySQL with docker/docker-compose

Here's my volumes config for my new container

      - /mnt/data/supervisor/addon_configs/custom/npm/data:/data
      - /mnt/data/supervisor/addon_configs/custom/npm/letsencrypt:/etc/letsencrypt
      - /mnt/data/supervisor/ssl:/ssl
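Put together, a minimal compose service along those lines might look like this. It is a sketch, not the commenter's exact setup: the image name and `DB_MYSQL_*` environment variables follow NPM's upstream documentation, the host paths are the ones quoted above, and the database host and credentials are placeholders.

```yaml
# Sketch: standalone NPM container reusing the add-on's data dirs.
# Assumes an existing MariaDB/MySQL instance reachable on the LAN.
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "80:80"    # HTTP
      - "81:81"    # admin UI
      - "443:443"  # HTTPS
    environment:
      DB_MYSQL_HOST: "192.168.1.10"   # placeholder: your MariaDB host
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "npm"            # placeholder credentials
      DB_MYSQL_PASSWORD: "changeme"
      DB_MYSQL_NAME: "npm"
    volumes:
      - /mnt/data/supervisor/addon_configs/custom/npm/data:/data
      - /mnt/data/supervisor/addon_configs/custom/npm/letsencrypt:/etc/letsencrypt
      - /mnt/data/supervisor/ssl:/ssl
```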

IMO the forced change to SQLite was completely unnecessary; at minimum, a new add-on should have been created, with this one only receiving an NPM version bump.

Sorry, this will be a longie, but will hopefully tie up most loose ends that others have brought up into something conclusive.

(edit: if you'd like a tl;dr version: the change is for the better, and though it could be done without this extensive damage, now it's too late for that, we can only learn from this and move on)


Frankly, this notion of "proper DBMS" amuses me. "Proper" is a completely arbitrary descriptor that merely indicates someone's (usually speaker's) approval, and that approval is quite often just a dogma they learned in a different context which may be irrelevant here. Sure, SQLite may be a different type of DBMS compared to MariaDB, in that it's embeddable rather than a network service. Sure, it's not designed for highly concurrent loads.

But NPM has no need for any of it. In the vast majority of setups its database merely records information manually put in by a single administrator (no concurrency!) and automated renewals of dozens of hosts tops every 60 days (90d TTL - 30d lead time?) each. It's a nigh-nonexistent load by "enterprise-grade DBMS" standards that powerhouses like MariaDB aim for. Sure, both can happen simultaneously, but that loses, what, milliseconds per cert renew on lock contention at worst? And it has no use for network access, since in normal operation the only connection to that database would be from just one instance of the app.

And it keeps the whole thing in a few files most of the time, which you can easily back up at any time along with the rest of the files comprising application state. If you want to back up the app state, you already have to back up its files, which in case of NPM would be certificates and certbot configurations. Using SQLite allows the rest of the data to rely on the same backup mechanism as well. It makes properly supporting operation of the app simpler without making it function any worse.


Still, I agree: this add-on could have been phased out into an archived/unmaintained state, with a new SQLite-based one created in its place. Nobody would have their existing setups destroyed (their existence is unfortunate, but all too likely to just dismiss), and setting up either add-on from scratch makes no difference.

But the damage here is already done. At this point we can only learn and move on.

  • For some it's going to be turning off unattended addon updates.
  • For some, never turning on unattended updates for anything still in versions 0.x to begin with, as these can break in any conceivable way on their way to 1.0 (it's considered common enough knowledge by many software people to omit, but for the rest, generally popular Semantic Versioning states that in writing in 4th point). Still, even minor upgrades above 1.0 can end horribly (I bet many in this thread have stories they can share, I have one as recent as from a few months ago).
  • For some, paying more attention to their backup routines, so that when things break like this there's at least a quick recovery plan.
  • For some it might even be motivation to improve the addon infrastructure to make cases like this less likely to happen in the future.
  • For some, migrating NPM to be a user-managed container rather than an addon (sure, it's not recommended to run those alongside the supervisor, but people still do it).
  • Some might just not upgrade, per the adage "if it ain't broke, don't fix it". (Which is dangerous, as the specifics of "broke" keep changing over time, as issues in just about everything rise to the surface.)

…not necessarily just one of these and maybe even something not on the list.

Please pick yours and if you don't have anything new and relevant to add to this discussion, please consider if the comment you intend to write will help anyone. (And this isn't a roundabout way of saying "don't", I mean exactly what these words say, and I'm sad that I even have to emphasize this.)