hassio-addons/addon-influxdb

InfluxDB does not start after install

buuenke opened this issue · 2 comments

Problem/Motivation

The InfluxDB add-on does not start after installation.

Expected behavior

The add-on starts normally and InfluxDB becomes available.

Actual behavior

The add-on hangs; the last message in the log is "Starting NGINX..." and InfluxDB never becomes reachable. Full log below:

Add-on: InfluxDB
Scalable datastore for metrics, events, and real-time analytics

Add-on version: 5.0.0
You are running the latest version of this add-on.
System: Home Assistant OS 12.3 (aarch64 / raspberrypi4-64)
Home Assistant Core: 2024.5.2
Home Assistant Supervisor: 2024.05.1

Please, share the above information when looking for help
or support in, e.g., GitHub, forums or the Discord chat.

s6-rc: info: service base-addon-banner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service base-addon-timezone: starting
s6-rc: info: service base-addon-log-level: starting
s6-rc: info: service fix-attrs successfully started
[19:10:17] INFO: Configuring timezone (Europe/Amsterdam)...
s6-rc: info: service base-addon-log-level successfully started
s6-rc: info: service base-addon-timezone successfully started
s6-rc: info: service legacy-cont-init: starting
cont-init: info: running /etc/cont-init.d/create-users.sh
[19:10:21] INFO: InfluxDB init process in progress...
[19:10:26] INFO: InfluxDB init process in progress...
[19:10:31] INFO: InfluxDB init process in progress...
[tcp] 2024/05/16 19:10:37 tcp.Mux: Listener at 127.0.0.1:8088 failed failed to accept a connection, closing all listeners - accept tcp 127.0.0.1:8088: use of closed network connection
cont-init: info: /etc/cont-init.d/create-users.sh exited 0
cont-init: info: running /etc/cont-init.d/influxdb.sh
cont-init: info: /etc/cont-init.d/influxdb.sh exited 0
cont-init: info: running /etc/cont-init.d/kapacitor.sh
cont-init: info: /etc/cont-init.d/kapacitor.sh exited 0
cont-init: info: running /etc/cont-init.d/nginx.sh
cont-init: info: /etc/cont-init.d/nginx.sh exited 0
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun chronograf (no readiness notification)
services-up: info: copying legacy longrun influxdb (no readiness notification)
services-up: info: copying legacy longrun kapacitor (no readiness notification)
services-up: info: copying legacy longrun nginx (no readiness notification)
[19:10:39] INFO: Kapacitor is waiting until InfluxDB is available...
s6-rc: info: service legacy-services successfully started
[19:10:39] INFO: Chronograf is waiting until InfluxDB is available...
[19:10:39] INFO: Starting the InfluxDB...
[19:10:54] INFO: Starting the Kapacitor
[19:10:54] INFO: Starting Chronograf...

[Kapacitor ASCII art banner]
2024/05/16 19:10:55 Using configuration at: /etc/kapacitor/kapacitor.conf

time="2024-05-16T19:11:05+02:00" level=info msg="Reporting usage stats" component=usage freq=24h reporting_addr="https://usage.influxdata.com" stats="os,arch,version,cluster_id,uptime"
time="2024-05-16T19:11:05+02:00" level=info msg="Serving chronograf at http://127.0.0.1:8889" component=server
[19:11:06] INFO: Starting NGINX...
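
A quick sanity check once the log reaches "Starting NGINX..." is whether the InfluxDB HTTP API inside the add-on actually answers. The snippet below is only an illustration: the hostname a0d7b954-influxdb and port 8086 are assumed community add-on defaults that do not appear in the report above, so adjust them to your setup. Note that the 127.0.0.1:8088 listener in the error line is InfluxDB's internal backup/restore RPC port, not the HTTP API.

# Hedged diagnostic sketch, not taken from the report above.
# A healthy InfluxDB 1.x instance answers /ping with HTTP 204.
# "a0d7b954-influxdb" and port 8086 are assumed defaults; adjust as needed.
curl -s -o /dev/null -w "%{http_code}\n" "http://a0d7b954-influxdb:8086/ping"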

I have exactly the same problem and am seeing the same errors in the log.

There hasn't been any activity on this issue recently, so we clean up some of the older and inactive issues.
Please make sure to update to the latest version and check if that solves the issue. Let us know if that works for you by leaving a comment 👍
This issue has now been marked as stale and will be closed if no further activity occurs. Thanks!