Helm error when enabling monitoring for Rook
johananl commented
lokoctl version: 0.6.1
When re-applying the rook component after enabling monitoring, the following error is shown:
Applying component 'rook'...
FATA[0005] Applying components failed: applying components: installing component "rook": upgrading release failed: rendered manifests contain a resource that already exists. Unable to continue with update: ServiceMonitor "rook-ceph-mgr" in namespace "rook" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "rook"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "rook" args="[rook]" command="lokoctl component apply"
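For anyone hitting this before a fix lands: Helm 3 refuses to adopt a resource it did not create unless the resource carries Helm's ownership metadata. A possible manual workaround (a sketch only, not verified against this cluster; the resource name and namespace are taken from the error above) is to add the expected label and annotations to the existing ServiceMonitor so the next apply can adopt it:

```shell
# Mark the pre-existing ServiceMonitor as managed by the "rook" Helm
# release so Helm can adopt it instead of failing (supported since Helm 3.2).
kubectl -n rook label servicemonitor rook-ceph-mgr \
  app.kubernetes.io/managed-by=Helm
kubectl -n rook annotate servicemonitor rook-ceph-mgr \
  meta.helm.sh/release-name=rook \
  meta.helm.sh/release-namespace=rook
```

After this, `lokoctl component apply rook` should be able to take ownership of the object rather than aborting.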
Cluster config
Sensitive fields are redacted.
cluster "packet" {
  asset_dir        = "./assets"
  cluster_name     = "johannes-test"
  controller_count = 1
  controller_type  = "t1.small.x86"

  dns {
    provider = "route53"
    zone     = ""
  }

  facility          = "ams1"
  project_id        = ""
  ssh_pubkeys       = [""]
  management_cidrs  = ["0.0.0.0/0"]
  node_private_cidr = ""

  worker_pool "pool-1" {
    count     = 3
    node_type = "c2.medium.x86"
  }

  oidc {
    issuer_url = ""
  }
}

component "rook" {
  #enable_monitoring = true
}

component "rook-ceph" {
  monitor_count = 3

  storage_class {
    enable  = true
    default = true
  }

  enable_toolbox = true
}

component "prometheus-operator" {
  grafana {
    ingress {
      host                       = ""
      class                      = "contour"
      certmanager_cluster_issuer = "letsencrypt-staging"
    }
  }
}
Steps to reproduce
lokoctl cluster apply -v --skip-components
lokoctl component apply rook
lokoctl component apply rook-ceph
lokoctl component apply prometheus-operator
Wait for Rook/Ceph and Prometheus to converge.
Enable monitoring:
sed -i 's/#enable_monitoring/enable_monitoring/' cluster.lokocfg
Re-apply the rook component:
lokoctl component apply rook
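To confirm the conflict before re-applying, the live ServiceMonitor can be inspected for the ownership metadata Helm complains about (a diagnostic sketch, assuming `kubectl` access to the cluster; in the failing state the grep returns nothing because the metadata is missing):

```shell
# Show the Helm ownership label/annotations on the existing object, if any.
kubectl -n rook get servicemonitor rook-ceph-mgr -o yaml \
  | grep -E 'app.kubernetes.io/managed-by|meta.helm.sh'
```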