[Bug] MARIADB_ROOT_HOST environment variable is not overridden.
Opened this issue · 14 comments
Documentation
- I acknowledge that I have read the relevant documentation.
Describe the bug
MARIADB_ROOT_HOST environment variable is not overridden.
I have to restrict access to my root account for security reasons.
The MARIADB_ROOT_HOST variable has been redefined, but the root account is still defined as root@%.
Expected behaviour
It should be defined as root@10.200.0.0/255.248.0.0 rather than root@%.
Steps to reproduce the bug
- define mariadb yaml
---
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-galera
spec:
  rootPasswordSecretKeyRef:
    name: db-sc
    key: mariadb-root-password
  storage:
    size: 100Gi
    storageClassName: ceph
  replicas: 3
  maxScale:
    enabled: false
  galera:
    enabled: true
    primary:
      podIndex: 0
      automaticFailover: true
    sst: mariabackup
    availableWhenDonor: false
    galeraLibPath: /usr/lib/galera/libgalera_smm.so
    replicaThreads: 4
    providerOptions:
      gcs.fc_limit: '64'
    agent:
      image: ghcr.io/mariadb-operator/mariadb-operator:v0.0.28
      port: 5555
      kubernetesAuth:
        enabled: false
      gracefulShutdownTimeout: 5s
    recovery:
      enabled: true
      minClusterSize: 50%
      clusterMonitorInterval: 10s
      clusterHealthyTimeout: 30s
      clusterBootstrapTimeout: 10m
      podRecoveryTimeout: 3m
      podSyncTimeout: 3m
    initContainer:
      image: ghcr.io/mariadb-operator/mariadb-operator:v0.0.28
    config:
      reuseStorageVolume: true
  podSecurityContext:
    runAsUser: 0
  volumes:
    - name: localtime
      hostPath:
        path: /etc/localtime
    - name: tls
      secret:
        secretName: mariadb-galera-tls
  volumeMounts:
    - name: localtime
      mountPath: /etc/localtime
      readOnly: true
    - name: tls
      mountPath: /certs
  service:
    type: ClusterIP
  primaryService:
    type: ClusterIP
  secondaryService:
    type: ClusterIP
  tolerations:
    - key: "k8s.mariadb.com/ha"
      operator: "Exists"
      effect: "NoSchedule"
  podDisruptionBudget:
    maxUnavailable: 33%
  updateStrategy:
    type: RollingUpdate
  livenessProbe:
    initialDelaySeconds: 20
    periodSeconds: 5
    timeoutSeconds: 5
  readinessProbe:
    initialDelaySeconds: 20
    periodSeconds: 5
    timeoutSeconds: 5
  metrics:
    enabled: false
  env:
    - name: MARIADB_ROOT_HOST
      value: "10.200.0.0/255.248.0.0"
  myCnf: |
    [mariadb]
    bind-address=*
    default_storage_engine=InnoDB
    binlog_format=row
    innodb_autoinc_lock_mode=2
    max_allowed_packet=256M
    ignore-db-dirs=lost+found
    lower_case_table_names=1
    wait_timeout=3600
    interactive_timeout=3600
    max_connections=2000
    open_files_limit=65535
    collation_server=utf8_unicode_ci
    init_connect='SET NAMES utf8'
    character_set_server=utf8
    slow_query_log=1
    slow_query_log_file=/dev/null
    general_log=1
    general_log_file=/dev/null
    ssl_key=/certs/tls.key
    ssl_cert=/certs/tls.crt
    ssl_ca=/certs/ca.crt
    #require_secure_transport=on
    tls_version=TLSv1.2,TLSv1.3
    [client]
    default_character_set=utf8
- describe the pod
$ sudo kubectl describe po -n cloudpc mariadb-galera-0 |less
...
Containers:
  mariadb:
    ...
    Environment:
      MYSQL_TCP_PORT:            3306
      MARIADB_ROOT_HOST:         %
      MYSQL_INITDB_SKIP_TZINFO:  1
      CLUSTER_NAME:              cluster.local
      POD_NAME:                  mariadb-galera-0 (v1:metadata.name)
      POD_NAMESPACE:             test (v1:metadata.namespace)
      POD_IP:                    (v1:status.podIP)
      MARIADB_NAME:              mariadb-galera
      MARIADB_ROOT_PASSWORD:     <set to the key 'mariadb-root-password' in secret 'db-sc'>  Optional: false
      MARIADB_ROOT_HOST:         10.200.0.0/255.248.0.0
...
- The MARIADB_ROOT_HOST environment variable is not overridden; it appears twice instead.
Debug information
- Related object events:
$ sudo kubectl logs -n cloudpc mariadb-galera-0 -c agent
{"level":"info","ts":1715081845.125925,"msg":"Starting agent"}
{"level":"info","ts":1715081845.1277168,"logger":"server","msg":"server listening","addr":":5555"}
{"level":"error","ts":1715081867.1708312,"logger":"handler.probe.liveness","msg":"error getting SQL client","error":"Error 1045 (28000): Access denied for user 'root'@'::1' (using password: YES)","stacktrace":"github.com/mariadb-operator/mariadb-operator/pkg/galera/agent/handler.(*Probe).Liveness\n\t/app/pkg/galera/agent/handler/probe.go:67\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2166\ngithub.com/go-chi/chi/v5.(*Mux).routeHTTP\n\t/go/pkg/mod/github.com/go-chi/chi/v5@v5.0.12/mux.go:459\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2166\ngithub.com/go-chi/chi/v5/middleware.Recoverer.func1\n\t/go/pkg/mod/github.com/go-chi/chi/v5@v5.0.12/middleware/recoverer.go:45\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2166\ngithub.com/go-chi/chi/v5/middleware.(*Compressor).Handler-fm.(*Compressor).Handler.func1\n\t/go/pkg/mod/github.com/go-chi/chi/v5@v5.0.12/middleware/compress.go:209\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2166\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\t/go/pkg/mod/github.com/go-chi/chi/v5@v5.0.12/mux.go:90\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:3137\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:2039"}
{"level":"error","ts":1715081867.1709232,"logger":"handler.probe.readiness","msg":"error getting SQL client","error":"Error 1045 (28000): Access denied for user 'root'@'::1' (using password: YES)","stacktrace":"github.com/mariadb-operator/mariadb-operator/pkg/galera/agent/handler.(*Probe).Readiness\n\t/app/pkg/galera/agent/handler/probe.go:117\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2166\ngithub.com/go-chi/chi/v5.(*Mux).routeHTTP\n\t/go/pkg/mod/github.com/go-chi/chi/v5@v5.0.12/mux.go:459\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2166\ngithub.com/go-chi/chi/v5/middleware.Recoverer.func1\n\t/go/pkg/mod/github.com/go-chi/chi/v5@v5.0.12/middleware/recoverer.go:45\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2166\ngithub.com/go-chi/chi/v5/middleware.(*Compressor).Handler-fm.(*Compressor).Handler.func1\n\t/go/pkg/mod/github.com/go-chi/chi/v5@v5.0.12/middleware/compress.go:209\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2166\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\t/go/pkg/mod/github.com/go-chi/chi/v5@v5.0.12/mux.go:90\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:3137\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:2039"}
$ sudo kubectl get po -n cloudpc |grep mariadb-galera
mariadb-galera-0 1/2 CrashLoopBackOff 7 (17s ago) 10m
mariadb-galera-1 0/2 Init:0/1 0 10m
mariadb-galera-2 0/2 Init:0/1 0 10m
mariadb-galera-init-8tc9g 0/1 Completed 0 10m
- the liveness probe fails
Environment details:
- Kubernetes version: 1.28.3
- Kubernetes distribution: Vanilla
- mariadb-operator version: v0.0.28
- Install method: helm
- Install flavor: recommended
Additional context
Hey there @gsjeon! Thanks for bringing this up.
Currently, overriding the default environment variables is not possible. As you pointed out, `spec.env` is appended but not overridden.
The reason is that we define the default variables as a list here, and we append the variables provided by the user in `spec.env`.
Ideally, the default variables should be defined in a `map[string]corev1.EnvVar`, and the ability to override them could be implemented by matching the keys provided by the user in `spec.env`. We should preserve the same logic we currently have and add some tests to make sure this functionality behaves as expected when more changes are introduced.
It is certainly an interesting feature that provides good flexibility. Contributions are welcome!
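The map-based override could look roughly like this. This is a minimal, hypothetical sketch: it uses a plain `EnvVar` struct in place of `corev1.EnvVar`, and `mergeEnv` is an illustrative name, not the operator's actual API:

```go
package main

import "fmt"

// EnvVar is a stand-in for corev1.EnvVar, just for illustration.
type EnvVar struct {
	Name  string
	Value string
}

// mergeEnv starts from the operator defaults and lets user-provided
// variables override entries with the same Name, preserving the
// ordering of the defaults and appending any new user variables.
func mergeEnv(defaults, user []EnvVar) []EnvVar {
	overrides := make(map[string]EnvVar, len(user))
	for _, e := range user {
		overrides[e.Name] = e
	}
	merged := make([]EnvVar, 0, len(defaults)+len(user))
	seen := make(map[string]bool, len(defaults))
	for _, e := range defaults {
		if o, ok := overrides[e.Name]; ok {
			e = o // user value wins
		}
		merged = append(merged, e)
		seen[e.Name] = true
	}
	for _, e := range user {
		if !seen[e.Name] {
			merged = append(merged, e)
		}
	}
	return merged
}

func main() {
	defaults := []EnvVar{{"MYSQL_TCP_PORT", "3306"}, {"MARIADB_ROOT_HOST", "%"}}
	user := []EnvVar{{"MARIADB_ROOT_HOST", "10.200.0.0/255.248.0.0"}}
	fmt.Println(mergeEnv(defaults, user))
	// [{MYSQL_TCP_PORT 3306} {MARIADB_ROOT_HOST 10.200.0.0/255.248.0.0}]
}
```

User-provided values replace defaults with the same `Name` in place, so no duplicate entries end up in the container spec.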
This issue is stale because it has been open 30 days with no activity.
This issue was closed because it has been stalled for 10 days with no activity.
Hi, I actually looked into that issue, but I don't think there's a problem.
I went ahead and ran the operator locally:
make cluster
make install
make net
make run
I then applied a slightly modified version of the `examples/manifests/mariadb.yaml` file:
kubectl apply -f mariadb.yaml
mariadb.yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
spec:
  rootPasswordSecretKeyRef:
    name: mariadb-root
    key: password
    generate: true
  username: mariadb
  passwordSecretKeyRef:
    name: mariadb-password
    key: password
    generate: true
  database: mariadb
  port: 3306
  storage:
    size: 1Gi
  service:
    type: LoadBalancer
    metadata:
      annotations:
        metallb.universe.tf/loadBalancerIPs: 172.18.0.20
  myCnf: |
    [mariadb]
    bind-address=*
    default_storage_engine=InnoDB
    binlog_format=row
    innodb_autoinc_lock_mode=2
    innodb_buffer_pool_size=1024M
    max_allowed_packet=256M
  metrics:
    enabled: true
  env:
    - name: MARIADB_ROOT_HOST
      value: "10.200.0.0/255.248.0.0"
The only real difference being:
39a40,43
>
> env:
> - name: MARIADB_ROOT_HOST
> value: "10.200.0.0/255.248.0.0"
Indeed, if you describe the pod, you'll see the `MARIADB_ROOT_HOST` variable listed twice.
However, if you open an actual shell inside the container and grep for the `MARIADB_ROOT_HOST` variable, you'll notice that the value from the `spec` is used:
$ kubectl exec -it mariadb-0 -- env | grep MARIADB_ROOT_HOST
MARIADB_ROOT_HOST=10.200.0.0/255.248.0.0
If you further log in through the MariaDB CLI and issue a query against the `mysql.user` table, you'll notice that the host is appropriately set:
$ kubectl exec -it mariadb-0 -- bin/bash
mysql@mariadb-0:/$ mariadb -u root -p$MARIADB_ROOT_PASSWORD
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 157
Server version: 10.11.8-MariaDB-ubu2204 mariadb.org binary distribution
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> select user, host from mysql.user;
+-------------+------------------------+
| User | Host |
+-------------+------------------------+
| root | 10.200.0.0/255.248.0.0 |
| healthcheck | 127.0.0.1 |
| healthcheck | ::1 |
| healthcheck | localhost |
| mariadb.sys | localhost |
| root | localhost |
+-------------+------------------------+
6 rows in set (0.002 sec)
MariaDB [(none)]>
In other words, for duplicate environment variable keys, the latest value seems to take precedence.
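The observed "last value wins" resolution can be modeled with a tiny sketch (illustrative only; the actual collapsing happens when the container environment is materialized, not in operator code):

```go
package main

import "fmt"

// resolveEnv mimics how a list of NAME=VALUE entries with duplicate
// names collapses into a final environment: later entries win.
func resolveEnv(entries [][2]string) map[string]string {
	env := make(map[string]string, len(entries))
	for _, e := range entries {
		env[e[0]] = e[1]
	}
	return env
}

func main() {
	env := resolveEnv([][2]string{
		{"MARIADB_ROOT_HOST", "%"},                      // operator default
		{"MARIADB_ROOT_HOST", "10.200.0.0/255.248.0.0"}, // spec.env, appended later
	})
	fmt.Println(env["MARIADB_ROOT_HOST"]) // prints 10.200.0.0/255.248.0.0
}
```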
Thanks for the investigation @mbezhanov ! Interesting! FYI @gsjeon
This works just because the extra `env` is set after the internal environment, which might not be the case in the future. I would suggest refactoring the `mariadbEnv` func anyway to use a `map[string]string` to have a more robust solution.
I think we should also document the `env` field to clearly indicate that overriding env vars is supported.
@mmontes11 I'd be happy to rework `mariadbEnv` in `container_builder.go` to use a map instead of a list if you feel it'll be cleaner this way.
@gsjeon after redefining the `MARIADB_ROOT_HOST` variable in your manifest, how did you apply the changes to your cluster? Was it a `kubectl apply` on an existing `mariadb` object, which didn't initially have these `spec.env` settings? (I edited out my previous comment, as I failed to notice some details in your manifest.)
> @mmontes11 I'd be happy to rework `mariadbEnv` in `container_builder.go` to use a map instead of a list if you feel it'll be cleaner this way.
That would be great and very much appreciated! The only requirement is that it should be backwards compatible and have a test to prove that the map is correctly translated to an env slice.
@mbezhanov Thank you so much.
I simply created the manifest shown above (not reusing the PVC):
$ sudo kubectl create -n cloudpc -f app-mariadb.yaml
There is still an error.
$ kubectl logs -n cloudpc app-mariadb-0 -c agent
{"level":"info","ts":1719330900.609043,"msg":"Starting agent"}
{"level":"info","ts":1719330900.6104934,"logger":"server","msg":"server listening","addr":":5555"}
{"level":"error","ts":1719330923.2104418,"logger":"handler.probe.liveness","msg":"error getting SQL client","error":"Error 1045 (28000): Access denied for user 'root'@'::1' (using password: YES)","stacktrace":"github.com/mariadb-operator/mariadb-operator/pkg/galera/agent/handler.(*Probe).Liveness\n\t/app/pkg/galera/agent/handler/probe.go:67\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2166\ngithub.com/go-chi/chi/v5.(*Mux).routeHTTP\n\t/go/pkg/mod/github.com/go-chi/chi/v5@v5.0.12/mux.go:459\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2166\ngithub.com/go-chi/chi/v5/middleware.Recoverer.func1\n\t/go/pkg/mod/github.com/go-chi/chi/v5@v5.0.12/middleware/recoverer.go:45\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2166\ngithub.com/go-chi/chi/v5/middleware.(*Compressor).Handler-fm.(*Compressor).Handler.func1\n\t/go/pkg/mod/github.com/go-chi/chi/v5@v5.0.12/middleware/compress.go:209\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2166\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\t/go/pkg/mod/github.com/go-chi/chi/v5@v5.0.12/mux.go:90\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:3137\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:2039"}
When the `MARIADB_ROOT_HOST` variable is commented out, it runs normally.
$ sudo kubectl get cm -n kube-system kubeadm-config -o yaml |grep podSubnet
podSubnet: 10.233.64.0/18
$ vi app-mariadb.yaml
...
#env:
#  - name: MARIADB_ROOT_HOST
#    value: "10.233.64.0/255.255.192.0"
...
$ sudo kubectl create -n cloudpc -f app-mariadb.yaml
certificate.cert-manager.io/app-mariadb-tls created
mariadb.k8s.mariadb.com/app-mariadb created
user.k8s.mariadb.com/devuser created
$ kubectl get po -n cloudpc |grep app-maria
app-mariadb-0 2/2 Running 0 3m48s
app-mariadb-1 2/2 Running 0 3m48s
app-mariadb-2 2/2 Running 0 3m48s
@mmontes11 @mbezhanov
What do you think about revising it as below?
$ git diff
diff --git a/api/v1alpha1/mariadb_types.go b/api/v1alpha1/mariadb_types.go
index f0f554db..71cf1d15 100644
--- a/api/v1alpha1/mariadb_types.go
+++ b/api/v1alpha1/mariadb_types.go
@@ -457,6 +457,8 @@ type MariaDBSpec struct {
// +optional
// +operator-sdk:csv:customresourcedefinitions:type=spec
SecondaryConnection *ConnectionTemplate `json:"secondaryConnection,omitempty" webhook:"inmutable"`
+
+ RootHost string `json:"rootHost,omitempty"`
}
// MariaDBStatus defines the observed state of MariaDB
diff --git a/pkg/builder/container_builder.go b/pkg/builder/container_builder.go
index 5aff1d1b..8afb6a61 100644
--- a/pkg/builder/container_builder.go
+++ b/pkg/builder/container_builder.go
@@ -344,6 +344,11 @@ func mariadbEnv(mariadb *mariadbv1alpha1.MariaDB) []corev1.EnvVar {
clusterName = "cluster.local"
}
+ rootHost := "%"
+ if mariadb.Spec.RootHost != "" {
+ rootHost = mariadb.Spec.RootHost
+ }
+
env := []corev1.EnvVar{
{
Name: "MYSQL_TCP_PORT",
@@ -351,7 +356,7 @@ func mariadbEnv(mariadb *mariadbv1alpha1.MariaDB) []corev1.EnvVar {
},
{
Name: "MARIADB_ROOT_HOST",
- Value: "%",
+ Value: rootHost,
},
{
Name: "MYSQL_INITDB_SKIP_TZINFO",
@gsjeon this isn't going to resolve your issue, though. I did a little digging and found the root cause, and it turns out to be different from what it originally looked like.
I applied your manifest locally, with and without the `MARIADB_ROOT_HOST` setting.
Without `MARIADB_ROOT_HOST`, the probes work and `mysql.user` looks like this:
root@mariadb-galera-0:/# mariadb -uroot -p$MARIADB_ROOT_PASSWORD -e 'select user, host from mysql.user'
+-------------+-----------+
| User | Host |
+-------------+-----------+
| root | % |
| healthcheck | 127.0.0.1 |
| healthcheck | ::1 |
| healthcheck | localhost |
| mariadb.sys | localhost |
| root | localhost |
+-------------+-----------+
root@mariadb-galera-0:/#
With `MARIADB_ROOT_HOST`, `mysql.user` looks like this, but the probes stop working:
root@mariadb-galera-0:/# mariadb -uroot -p$MARIADB_ROOT_PASSWORD -e 'select user, host from mysql.user'
+-------------+------------------------+
| User | Host |
+-------------+------------------------+
| root | 10.200.0.0/255.248.0.0 |
| healthcheck | 127.0.0.1 |
| healthcheck | ::1 |
| healthcheck | localhost |
| mariadb.sys | localhost |
| root | localhost |
+-------------+------------------------+
root@mariadb-galera-0:/#
Note the `root` user entries. The `10.200.0.0/255.248.0.0` restriction is applied successfully to the `root` user account, so your `spec.env` is being applied correctly; there's no issue there.
Why do the probes stop working then?
Each `mariadb-galera-*` pod consists of a `mariadb` container and an `agent` sidecar container.
Your probes use the `agent` sidecar to perform their checks. The `agent` sidecar runs an HTTP server on port `5555`, and the probes basically query `http://:5555/liveness` and `http://:5555/readiness`.
When an HTTP request arrives at `http://:5555/liveness`, the `agent` establishes a TCP connection to the MariaDB server to issue the following query and examine its result:
SELECT variable_value FROM information_schema.global_status WHERE variable_name = "wsrep_cluster_status";
From the 8.2.4 Specifying Account Names section of the MySQL manual:
> A host value can be a host name or an IP address (IPv4 or IPv6). The name 'localhost' indicates the local host. The IP address '127.0.0.1' indicates the IPv4 loopback interface. The IP address '::1' indicates the IPv6 loopback interface.
From the 6.2.4 Connecting to the MySQL Server Using Command Options section of the MySQL manual:
> On Unix, MySQL programs treat the host name `localhost` specially, in a way that is likely different from what you expect compared to other network-based programs: the client connects using a Unix socket file.
To decipher this: having a grant for `root@localhost` allows connections through the `/run/mysqld/mysqld.sock` socket in the `mariadb` container. For TCP connections from `agent` to `mariadb`, you need additional grants for `root@::1` and `root@127.0.0.1`, but you don't have them, since you replaced the default `root@%` grant with `root@10.200.0.0/255.248.0.0`.
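One way to confirm which grant a connection matched is to ask the server itself; `CURRENT_USER()` returns the account the server authenticated you as (a quick diagnostic sketch, not part of the operator):

```sql
-- Over the Unix socket (the default inside the mariadb container),
-- this typically reports root@localhost:
SELECT CURRENT_USER();

-- Forcing TCP instead (e.g. `mariadb --protocol=tcp -h 127.0.0.1 ...`)
-- only succeeds if a grant like 'root'@'127.0.0.1' (or a matching
-- network grant) exists, which is exactly what the agent needs.
```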
@gsjeon Try this as a temporary workaround.
Step 1: Create a new file named `initroot.sql` and populate it with the following contents:
SET @pw := (SELECT authentication_string FROM mysql.user WHERE User = 'root' and Host = 'localhost');
SET @sql := CONCAT("CREATE USER IF NOT EXISTS 'root'@'127.0.0.1' IDENTIFIED WITH mysql_native_password AS '", @pw, "';");
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
GRANT ALL ON *.* TO 'root'@'127.0.0.1';
SET @sql := CONCAT("CREATE USER IF NOT EXISTS 'root'@'::1' IDENTIFIED WITH mysql_native_password AS '", @pw, "';");
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
GRANT ALL ON *.* TO 'root'@'::1';
Step 2: Create a `ConfigMap` from the `initroot.sql` file:
kubectl create configmap initroot --from-file=initroot.sql
Step 3: Add the following `volumes` and `volumeMounts` declarations to your `mariadb.yaml`:
volumes:
  . . .
  - name: initroot
    configMap:
      name: initroot
. . .
volumeMounts:
  . . .
  - name: initroot
    mountPath: /docker-entrypoint-initdb.d
The full `mariadb.yaml` for reference:
---
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-galera
spec:
  rootPasswordSecretKeyRef:
    name: db-sc
    key: mariadb-root-password
  storage:
    size: 100Gi
    storageClassName: ceph
  replicas: 3
  maxScale:
    enabled: false
  galera:
    enabled: true
    primary:
      podIndex: 0
      automaticFailover: true
    sst: mariabackup
    availableWhenDonor: false
    galeraLibPath: /usr/lib/galera/libgalera_smm.so
    replicaThreads: 4
    providerOptions:
      gcs.fc_limit: '64'
    agent:
      image: ghcr.io/mariadb-operator/mariadb-operator:v0.0.28
      port: 5555
      kubernetesAuth:
        enabled: false
      gracefulShutdownTimeout: 5s
    recovery:
      enabled: true
      minClusterSize: 50%
      clusterMonitorInterval: 10s
      clusterHealthyTimeout: 30s
      clusterBootstrapTimeout: 10m
      podRecoveryTimeout: 3m
      podSyncTimeout: 3m
    initContainer:
      image: ghcr.io/mariadb-operator/mariadb-operator:v0.0.28
    config:
      reuseStorageVolume: true
  podSecurityContext:
    runAsUser: 0
  volumes:
    - name: localtime
      hostPath:
        path: /etc/localtime
    - name: tls
      secret:
        secretName: mariadb-galera-tls
    - name: initroot
      configMap:
        name: initroot
  volumeMounts:
    - name: localtime
      mountPath: /etc/localtime
      readOnly: true
    - name: tls
      mountPath: /certs
    - name: initroot
      mountPath: /docker-entrypoint-initdb.d
  service:
    type: ClusterIP
  primaryService:
    type: ClusterIP
  secondaryService:
    type: ClusterIP
  tolerations:
    - key: "k8s.mariadb.com/ha"
      operator: "Exists"
      effect: "NoSchedule"
  podDisruptionBudget:
    maxUnavailable: 33%
  updateStrategy:
    type: RollingUpdate
  livenessProbe:
    initialDelaySeconds: 20
    periodSeconds: 5
    timeoutSeconds: 5
  readinessProbe:
    initialDelaySeconds: 20
    periodSeconds: 5
    timeoutSeconds: 5
  metrics:
    enabled: false
  env:
    - name: MARIADB_ROOT_HOST
      value: "10.200.0.0/255.248.0.0"
  myCnf: |
    [mariadb]
    bind-address=*
    default_storage_engine=InnoDB
    binlog_format=row
    innodb_autoinc_lock_mode=2
    max_allowed_packet=256M
    ignore-db-dirs=lost+found
    lower_case_table_names=1
    wait_timeout=3600
    interactive_timeout=3600
    max_connections=2000
    open_files_limit=65535
    collation_server=utf8_unicode_ci
    init_connect='SET NAMES utf8'
    character_set_server=utf8
    slow_query_log=1
    slow_query_log_file=/dev/null
    general_log=1
    general_log_file=/dev/null
    ssl_key=/certs/tls.key
    ssl_cert=/certs/tls.crt
    ssl_ca=/certs/ca.crt
    #require_secure_transport=on
    tls_version=TLSv1.2,TLSv1.3
    [client]
    default_character_set=utf8
Step 4: Run `kubectl apply`:
kubectl apply -f mariadb.yaml
The cluster should come up shortly, and the `mysql.user` table looks like this:
root@mariadb-galera-0:/# mariadb -uroot -p$MARIADB_ROOT_PASSWORD -e 'select user, host from mysql.user order by user asc';
+-------------+------------------------+
| User | Host |
+-------------+------------------------+
| healthcheck | 127.0.0.1 |
| healthcheck | ::1 |
| healthcheck | localhost |
| mariadb.sys | localhost |
| root | localhost |
| root | 10.200.0.0/255.248.0.0 |
| root | 127.0.0.1 |
| root | ::1 |
+-------------+------------------------+
root@mariadb-galera-0:/#
Sorry, I know it's a little dirty, but I couldn't think of anything better.
@mmontes11 would it make sense to try using the `healthcheck` user for the liveness and readiness probes instead of `root`? As per Using Healthcheck.sh:
> By default, (since 2023-06-27), official images will create healthcheck@localhost, healthcheck@127.0.0.1, healthcheck@::1 users with a random password and USAGE privileges.
Its credentials are stored in the `/var/lib/mysql/.my-healthcheck.cnf` file:
root@mariadb-galera-0:/# cat /var/lib/mysql/.my-healthcheck.cnf
[mariadb-client]
port=3306
socket=/run/mysqld/mysqld.sock
user=healthcheck
password=fpKQ;fy&Rhxvo8QH5KJ"+`+KWQOi[@F0
protocol=tcp
The only problem is this file isn't distributed across the other nodes in the cluster (i.e., `mariadb-galera-0` has the file, but `mariadb-galera-1` and `mariadb-galera-2` do not), so some better way of persisting/sharing these credentials will have to be implemented.
> some better way of persisting/sharing these credentials will have to be implemented.
I cannot really think of a way to efficiently and securely ship these credentials between nodes in a simple way. Besides, the credentials would need to be available before the probes start, making it even more challenging.
What I do know is a more native way of achieving the same as in #605 (comment). Brilliant investigation here @mbezhanov, by the way!
Right, so the piece we are missing is `USAGE` permissions for `'root'@'127.0.0.1'` and `'root'@'::1'`. This can be achieved with the `Grant` CR, which will eventually be reconciled by the operator before the `MariaDB` probes start:
apiVersion: k8s.mariadb.com/v1alpha1
kind: Grant
metadata:
  name: grant-root-usage
spec:
  mariaDbRef:
    name: mariadb
  privileges:
    - "USAGE"
  database: "*"
  table: "*"
  username: root
  host: "127.0.0.1"
---
apiVersion: k8s.mariadb.com/v1alpha1
kind: Grant
metadata:
  name: grant-root-usage-ipv6
spec:
  mariaDbRef:
    name: mariadb
  privileges:
    - "USAGE"
  database: "*"
  table: "*"
  username: root
  host: "::1"
We will need to create these grants together with the `MariaDB` resource that constrains `MARIADB_ROOT_HOST`, which was the initial need of this issue.
Please let me know if this makes sense and solves your issue. @mbezhanov has tackled the overriding issue in #711 (very much appreciated!), so I would say that if the previously suggested solution works for you, we can close this issue.
> Right, so the piece we are missing is `USAGE` permissions for `'root'@'127.0.0.1'` and `'root'@'::1'`. This can be achieved with the `Grant` CR, which will eventually be reconciled by the operator before the `MariaDB` probes start:
Actually no: if `MariaDB` doesn't become ready (readiness probe OK), the `Grant` won't be reconciled. Therefore these permissions need to be available when the container starts, as the solution in #605 (comment) proposes.
This issue is stale because it has been open 60 days with no activity.