[helm] pods stuck in init using custom PostgreSQL and Elasticsearch
Closed this issue · 1 comment
nodesocket commented
I am using my own Elasticsearch and PostgreSQL, so in values.yaml I am setting:
```yaml
...
data:
  memory: 256M
database:
  # if empty {{- .Release.Name -}}-postgresql will be used
  host: postgres-master.default.svc.cluster.local
  port: 5432
  tls: false
  name: fusionauth
  user: localhost
  password: localhost
  root:
    user: localhost
    password: localhost
elasticsearch:
  host: elasticsearch-master.default.svc.cluster.local
  port: 9200
...
elasticsearch:
  enabled: false
postgresql:
  enabled: false
```
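(For context on the fallback mentioned in the comment above: the deployment template presumably builds the JDBC URL with a default along these lines. This is a sketch, not the chart's actual template; the exact expression is an assumption.)

```yaml
# Sketch only: if .Values.database.host is empty,
# "<release-name>-postgresql" is used instead.
- name: DATABASE_URL
  value: "jdbc:postgresql://{{ default (printf "%s-postgresql" .Release.Name) .Values.database.host }}:{{ .Values.database.port }}/{{ .Values.database.name }}"
```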
But the pods are just stuck on:
```
fusionauth-6c4745bfd9-6jgth   0/1   Init:0/2   0   16m
fusionauth-6c4745bfd9-r48k2   0/1   Init:0/2   0   16m
fusionauth-6c4745bfd9-sjl72   0/1   Init:0/2   0   16m
```
Looking at the pods' env vars, I see:
```
Environment:
  DATABASE_USER:              localhost
  DATABASE_PASSWORD:          localhost
  DATABASE_ROOT_PASSWORD:     localhost
  DATABASE_ROOT_USER:         localhost
  DATABASE_URL:               jdbc:postgresql://fusionauth-postgresql:5432/fusionauth
  FUSIONAUTH_SEARCH_SERVERS:  http://elasticsearch-master:9200
  FUSIONAUTH_MEMORY:          256M
```
Is that right? I would have expected to see the custom hosts in the env vars.
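Init:0/2 means two init containers have not completed. Since the rendered env vars still point at the default in-cluster service names, and the bundled postgresql and elasticsearch subcharts are disabled, init containers that wait for those services would block forever. A minimal sketch of that pattern (the container names, image, and commands here are illustrative, not the chart's actual spec):

```yaml
# Illustrative only; the chart's real init containers may differ.
initContainers:
  - name: wait-for-database
    image: busybox:1.28
    command: ["sh", "-c", "until nslookup fusionauth-postgresql; do echo waiting for database; sleep 2; done"]
  - name: wait-for-search
    image: busybox:1.28
    command: ["sh", "-c", "until nslookup elasticsearch-master; do echo waiting for search; sleep 2; done"]
```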
drpebcak commented
@nodesocket can you try this with the latest version of the chart from here: https://github.com/FusionAuth/charts
Some things have changed, so you will have to modify your values, but it should work.
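For anyone hitting this later: with the chart from that repo, the external database and search endpoints are set through top-level values roughly like the following. The key names below are illustrative; verify them against the chart's values.yaml before copying.

```yaml
# Illustrative layout for pointing the chart at external services;
# confirm the exact keys in the chart's values.yaml.
database:
  host: postgres-master.default.svc.cluster.local
  port: 5432
  name: fusionauth
  user: fusionauth
  password: changeme   # replace with the real password
search:
  host: elasticsearch-master.default.svc.cluster.local
  port: 9200
```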