/efk-stack-app

Logging stack based on Elasticsearch

Primary language: Mustache · License: Apache-2.0


⛔️DEPRECATED efk-stack-app chart

This repository is no longer actively maintained or updated.

Instead we recommend using either:

More information about audit logging: https://docs.giantswarm.io/getting-started/observability/logging/audit-logs/

Description

Giant Swarm offers an Opendistro Managed App which can be installed in tenant clusters. Here we define the Opendistro chart with its templates and default configuration.

This application provides centralized log storage for your applications. It is not meant for long-term storage and is configured to delete old indices (older than 7 days by default).

Changelog

Important notes - read before you deploy

  1. This chart runs an Elasticsearch document database. It is recommended to run it with 3 master and 3 data pods for the Elasticsearch deployment; these are already the defaults for this chart. To make sure that losing a single Kubernetes node costs you at most 1 Elasticsearch pod (to operate, Elasticsearch needs at least 2 out of 3 master pods to be up), we use pod anti-affinity to forbid running more than 1 master pod on a single Kubernetes node. This means that, as an absolute minimum, you need at least 3 Kubernetes nodes (by default) in your Kubernetes cluster. If you have only 3 nodes (or only as many as the configured replication factor for master pods) and you want to roll some of these Kubernetes nodes, please create new nodes first, then delete the old ones. Your Kubernetes cluster needs to have at least 3 nodes at all times; only then can your Elasticsearch survive 1 Kubernetes node crash without data loss.
  2. To ensure that you won't impact your Elasticsearch deployment by accident, we're including PodDisruptionBudgets (PDBs) by default. This means you won't be able to drain your nodes if doing so would violate Elasticsearch's minimum quorum (2 out of 3 in the default configuration).
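As an illustration of the second point, a PodDisruptionBudget protecting the master quorum could look roughly like the following. This is a hypothetical sketch: the name and label selector are invented for illustration, and the authoritative manifests are the ones in the chart's templates.

```yaml
# Hypothetical sketch of a PDB protecting the Elasticsearch master quorum.
# The name and labels are illustrative; see the chart templates for the real ones.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: efk-stack-app-opendistro-es-master-pdb
spec:
  # With 3 master pods, at least 2 must stay up to keep quorum,
  # so a drain that would take the count below 2 is blocked.
  minAvailable: 2
  selector:
    matchLabels:
      role: master
```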

Compatibility

  • AWS: 9.0.0+
  • Azure: 9.0.0+
  • KVM: 9.0.0+ (Persistent volumes are required)

Features

  • OpenDistro Elasticsearch with security enabled
  • Transport certificates autogenerated for 5 years
  • Curator will delete indices older than 7 days
  • Fluentd will collect all logs from pods not deployed in the namespaces: default, giantswarm, efk-stack-app, kube-node-lease, kube-public and kube-system.
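The namespace exclusion described in the last feature can be expressed with Fluentd's grep filter. The snippet below is a sketch under the assumption that the chart tags Kubernetes container logs as `kubernetes.**` and enriches records with `kubernetes.namespace_name` (the usual kubernetes metadata filter convention); the chart's shipped Fluentd configuration is the authoritative version.

```
# Hypothetical sketch: drop log records from the excluded namespaces.
# Assumes records carry kubernetes.namespace_name metadata.
<filter kubernetes.**>
  @type grep
  <exclude>
    key $.kubernetes.namespace_name
    pattern /^(default|giantswarm|efk-stack-app|kube-node-lease|kube-public|kube-system)$/
  </exclude>
</filter>
```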

Additional information

  • If you change the name of the Helm release from "efk-stack-app" to "logging-stack", you will need to adapt the default configuration: change all references to "efk-stack-app-opendistro-es-client-service" in the values file to "logging-stack-opendistro-es-client-service".
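One way to find and rewrite those references is with grep and sed against a local copy of the values file. This is an illustrative sketch, not part of the chart; the sample values line is invented, and the in-place `sed -i` form shown is GNU sed (on BSD/macOS use `sed -i ''`).

```shell
# Hypothetical illustration: a local values file containing the default reference
printf 'host: efk-stack-app-opendistro-es-client-service\n' > values.yaml

# List every line that still points at the old release name
grep -n 'efk-stack-app-opendistro-es-client-service' values.yaml

# Rewrite all references in place (GNU sed; on BSD/macOS: sed -i '' ...)
sed -i 's/efk-stack-app-opendistro-es-client-service/logging-stack-opendistro-es-client-service/g' values.yaml
```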

Running on NFS

Using NFS for storage is not officially supported by Elasticsearch. Problems can happen when Pods are evicted unexpectedly: stale lock files are left behind, so Elasticsearch in the restarted Pod will complain and stop writing data. As a workaround it is possible to set opendistro-es.elasticsearch.deleteLockfiles.enabled: true in the values file. In that case an init container will delete all lock files present on the storage for the Elasticsearch data nodes. Please enable this with care! Only enable it when running on NFS and these issues have already shown up.
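In the values file, the key path named above corresponds to the following fragment:

```yaml
# Workaround for stale lock files on NFS.
# Only enable when running on NFS and the lock-file issue has already occurred.
opendistro-es:
  elasticsearch:
    deleteLockfiles:
      enabled: true
```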

Components

OpenDistro

OSS Elasticsearch distribution.

Additional info

OpenDistro Certificate Generator

Generates the certificates needed to start OpenDistro securely. It will create a root CA and the certificate that will be used to establish communication between nodes.

Additional info

ElasticSearch Exporter

Exposes Prometheus metrics that can be explored with https://grafana.com/grafana/dashboards/2322

Additional info

ElasticSearch Curator

Manages the ElasticSearch index lifecycle; by default it is configured to delete indices older than 7 days.
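The chart's actual Curator settings live in its values file; as a sketch, a standalone Curator action file implementing the same 7-day retention could look like the following. The "fluentd-" index prefix is an assumption for illustration; match it to your actual index naming.

```yaml
# Hypothetical Curator action file: delete indices older than 7 days.
# The "fluentd-" prefix is illustrative; adjust to your index names.
actions:
  1:
    action: delete_indices
    description: Delete indices older than 7 days
    options:
      ignore_empty_list: true
      continue_if_exception: false
    filters:
    - filtertype: pattern
      kind: prefix
      value: fluentd-
    - filtertype: age
      source: creation_date
      direction: older
      unit: days
      unit_count: 7
```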

Additional info

FluentD

Log collector and parser that will send pod logs to ElasticSearch.

Additional info

Configuration

This chart is composed of multiple Helm charts, and each of them can be configured from a single values file with the following format:

opendistro-certs:
  Check ./helm/efk-stack-app/charts/opendistro-certs/values.yaml

elasticsearch-curator:
  Check ./helm/efk-stack-app/charts/elasticsearch-curator/values.yaml

elasticsearch-exporter:
  Check ./helm/efk-stack-app/charts/elasticsearch-exporter/values.yaml

fluentd:
  Check ./helm/efk-stack-app/charts/fluentd/values.yaml

opendistro-es:
  Check ./helm/efk-stack-app/charts/opendistro-es/values.yaml

Check default values file for all components.

This configuration has been tuned by our team to give sane defaults for all components, and modifying the internal_users.yml file should be enough in most cases.

Example Configurations

Here are some example configurations for getting this App running on your cluster. Make sure you change the hosts: keys to something that matches your installation and cluster id.

The files here can be downloaded, edited, and uploaded directly as the 'values.yaml' during the installation step in our Web UI. If you are not using the web interface to install your app, then you must place these values into a user-level ConfigMap in the required format and reference it from the App CR. Read our reference on app configuration for more details.
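As a rough sketch, delivering user values via a ConfigMap referenced from an App CR could look as follows. All names, the namespace, the catalog, the version, and the values payload here are hypothetical placeholders, and the App CR is abbreviated; consult the app configuration reference for the authoritative schema and required fields.

```yaml
# Hypothetical sketch: user values in a ConfigMap, referenced from an App CR.
apiVersion: v1
kind: ConfigMap
metadata:
  name: efk-stack-app-user-values
  namespace: efk-stack-app
data:
  values: |
    opendistro-es:
      kibana:
        enabled: true
---
apiVersion: application.giantswarm.io/v1alpha1
kind: App
metadata:
  name: efk-stack-app
  namespace: efk-stack-app
spec:
  catalog: giantswarm        # placeholder catalog name
  name: efk-stack-app
  version: "0.0.0"           # replace with the chart version you install
  userConfig:
    configMap:
      name: efk-stack-app-user-values
      namespace: efk-stack-app
```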

AWS Example Configuration

Do not use in production, just for testing

Default user: admin/admin

Azure Example Configuration

Do not use in production, just for testing

Default user: admin/admin

Security Configuration

Modify internal_users.yml before deploying the application

Default user: admin/test

You will need to create additional resources in Kubernetes to make it more secure:

    $ kubectl create secret generic -n efk-stack-app opendistro-security-config --from-file=config_examples/config.yml
    $ kubectl create secret generic -n efk-stack-app opendistro-internal-users --from-file=config_examples/internal_users.yml

You need to change the password values in the internal_users.yml file and adjust the values in security_config.yaml accordingly.
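As a sketch, an entry in internal_users.yml follows OpenDistro's internal-users format, roughly as below. The hash value here is a placeholder: generate a real bcrypt hash with the security plugin's hash.sh tool (e.g. `plugins/opendistro_security/tools/hash.sh -p <password>`) and never put a plaintext password in this file.

```yaml
# Hypothetical internal_users.yml entry; replace the hash with one generated
# by the OpenDistro security plugin's hash.sh tool.
_meta:
  type: "internalusers"
  config_version: 2

admin:
  hash: "$2y$12$REPLACE_WITH_BCRYPT_HASH"
  reserved: true
  backend_roles:
  - "admin"
  description: "Admin user"
```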

Credit