This repository is no longer actively maintained or updated.
Instead, we recommend using either:
More information about audit logging is available at: https://docs.giantswarm.io/getting-started/observability/logging/audit-logs/
Giant Swarm offers an Opendistro Managed App which can be installed in tenant clusters. Here we define the Opendistro chart with its templates and default configuration.
This application is intended to provide centralized log storage for your applications. It is not meant to be long-term storage and is set up to delete old indices (older than 7 days by default).
- This chart runs an Elasticsearch document database. It is recommended to run it with 3 `master` and 3 `data` pods for the Elasticsearch deployment; these values are already the defaults for this chart. However, to make sure that losing a single Kubernetes node cannot take down more than one Elasticsearch pod (to operate, Elasticsearch needs at least 2 out of 3 `master` pods to be up), we use pod anti-affinity to forbid running more than one `master` pod on a single Kubernetes node. This means that, as an absolute minimum, you need at least 3 Kubernetes nodes (by default) in your Kubernetes cluster. If you have only 3 nodes (or only as many as the configured replica count for `master` pods) and you want to roll some of these Kubernetes nodes, please create the new nodes first, then delete the old ones. Your Kubernetes cluster needs to have at least 3 nodes at all times - only then can your Elasticsearch survive a single Kubernetes node crash without data loss. See the sketch after this list for overriding the replica counts.
- To ensure that you won't impact your Elasticsearch deployment by accident, we include PodDisruptionBudgets (PDBs) by default. This means you won't be able to drain your nodes if doing so violates Elasticsearch's minimum quorum count (2 out of 3 in the default config).
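If you need different replica counts, they can be overridden from the user values file. A minimal sketch, assuming the upstream opendistro-es chart's `elasticsearch.master.replicas` and `elasticsearch.data.replicas` keys (verify them against ./helm/efk-stack-app/charts/opendistro-es/values.yaml):

```yaml
# Sketch only: key names assumed from the upstream opendistro-es chart.
opendistro-es:
  elasticsearch:
    master:
      replicas: 3   # quorum needs 2 of the 3 master pods to be up
    data:
      replicas: 3
```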
- AWS: 9.0.0+
- Azure: 9.0.0+
- KVM: 9.0.0+ (Persistent volumes are required)
- OpenDistro Elasticsearch with security enabled
- Transport certificates autogenerated for 5 years
- Curator will delete indices older than 7 days
- Fluentd will collect all logs from pods not deployed in the namespaces: default, giantswarm, efk-stack-app, kube-node-lease, kube-public and kube-system.
- If you change the name of the Helm release from "efk-stack-app" to "logging-stack", you will need to adapt the default configuration and change all references of "efk-stack-app-opendistro-es-client-service" to "logging-stack-opendistro-es-client-service" in the values file (see the sketch below).
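As a purely hypothetical illustration of such a reference (the exact key path depends on the fluentd sub-chart's values file, check ./helm/efk-stack-app/charts/fluentd/values.yaml):

```yaml
# Hypothetical key path, for illustration only; only the service name comes
# from this README.
fluentd:
  elasticsearch:
    host: logging-stack-opendistro-es-client-service
```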
Using NFS for storage is not officially supported by Elasticsearch. Problems can happen when Pods are evicted unexpectedly: lock files are left behind, so Elasticsearch in the restarted Pod will complain and stop writing data. As a workaround it is possible to set `opendistro-es.elasticsearch.deleteLockfiles.enabled: true` in the values file. In that case an init container will delete all lock files present on the storage for the Elasticsearch data nodes. Please enable this with care! Only enable it when running on NFS and these issues have already shown up.
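In a user values file that override looks like this (the key path is the one given above):

```yaml
# Only enable this on NFS-backed storage where stale lock files have already
# caused problems.
opendistro-es:
  elasticsearch:
    deleteLockfiles:
      enabled: true
```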
OSS Elasticsearch distribution.
Generates the certificates needed to start OpenDistro in a secure way. It creates a root CA and the certificate that will be used to establish communication between nodes.
Exposes Prometheus metrics that can be explored with the Grafana dashboard at https://grafana.com/grafana/dashboards/2322
Manages the Elasticsearch index lifecycle; by default it is configured to delete indices older than 7 days.
Log collector and parser that sends pod logs to Elasticsearch.
This chart is composed of multiple Helm charts, and each can be configured from a single values file with the following format:
opendistro-certs:
  Check ./helm/efk-stack-app/charts/opendistro-certs/values.yaml
elasticsearch-curator:
  Check ./helm/efk-stack-app/charts/elasticsearch-curator/values.yaml
elasticsearch-exporter:
  Check ./helm/efk-stack-app/charts/elasticsearch-exporter/values.yaml
fluentd:
  Check ./helm/efk-stack-app/charts/fluentd/values.yaml
opendistro-es:
  Check ./helm/efk-stack-app/charts/opendistro-es/values.yaml
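A combined user values file follows that layout. A minimal sketch: the top-level keys are the ones listed above, while the nested settings are illustrative assumptions only, so check each sub-chart's values.yaml for the real options:

```yaml
# Illustrative only: the nested keys below are assumptions, verify them against
# the sub-chart values.yaml files listed above.
opendistro-es:
  kibana:
    enabled: true
elasticsearch-curator:
  # curator settings go here
fluentd:
  # fluentd settings go here
```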
Check the default values file for all components. This configuration has been tuned by our team to give sane defaults for all components, and modifying the internal_users.yml file should be enough in most cases.
Here are some example configurations for getting this App running on your cluster.
Make sure you change the `hosts:` keys to something that matches your installation and cluster ID.
The files here can be downloaded, edited, and uploaded directly as the values.yaml during the installation step in our Web UI. If you are not using the web interface to install your app, then you must place these values into a user-level ConfigMap formatted in the right way and reference it from the App CR, for example as sketched below. Read our reference on app configuration for more details.
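A minimal sketch of that wiring, assuming the Giant Swarm App CR schema; the names, namespace, and version below are placeholders, so consult the app configuration reference for the authoritative fields:

```yaml
# Placeholder names and version; the userConfig reference is the important part.
apiVersion: v1
kind: ConfigMap
metadata:
  name: efk-stack-app-user-values
  namespace: efk-stack-app
data:
  values: |
    opendistro-es:
      kibana:
        enabled: true
---
apiVersion: application.giantswarm.io/v1alpha1
kind: App
metadata:
  name: efk-stack-app
  namespace: efk-stack-app
spec:
  catalog: giantswarm
  name: efk-stack-app
  namespace: efk-stack-app
  version: 0.x.y          # placeholder, use the current app version
  userConfig:
    configMap:
      name: efk-stack-app-user-values
      namespace: efk-stack-app
```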
Do not use in production, just for testing
Default user: admin/admin
Do not use in production, just for testing
Default user: admin/admin
Modify internal_users.yml before deploying the application
Default user: admin/test
You will need to create additional resources in Kubernetes to make it more secure:
$ kubectl create secret generic -n efk-stack-app opendistro-security-config --from-file=config_examples/config.yml
$ kubectl create secret generic -n efk-stack-app opendistro-internal-users --from-file=config_examples/internal_users.yml
You need to change the password values in the internal_users.yml file and adjust the values in security_config.yaml accordingly.
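A sketch of how those secrets could then be wired into the chart's values; the `securityConfig` key names are assumptions based on the upstream opendistro-es chart, so verify them against ./helm/efk-stack-app/charts/opendistro-es/values.yaml:

```yaml
# Assumed key names from the upstream opendistro-es chart; only the secret names
# come from the kubectl commands above.
opendistro-es:
  elasticsearch:
    securityConfig:
      enabled: true
      configSecret: opendistro-security-config
      internalUsersSecret: opendistro-internal-users
```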