🚧 This project is currently in BETA. Splunk officially supports this project; however, there may be breaking changes.
The Splunk OpenTelemetry Collector for Kubernetes is a Helm chart for the Splunk Distribution of OpenTelemetry Collector. This chart creates a Kubernetes DaemonSet along with other Kubernetes objects in a Kubernetes cluster and provides a unified way to receive, process, and export metric, trace, and log data to the destinations listed below.
Installations that use this distribution can receive direct help from Splunk's support teams. Customers are free to use the core OpenTelemetry OSS components (several do!). We will provide best effort guidance for using these components; however, only the Splunk distributions are in scope for official Splunk support and support-related SLAs.
This distribution currently supports:
- Splunk APM via the sapm exporter. The otlphttp exporter can be used with a custom configuration (see the sketch after this list). More information available here.
- Splunk Infrastructure Monitoring via the signalfx exporter. More information available here.
- Splunk Log Observer via the splunk_hec exporter.
- Splunk Cloud or Splunk Enterprise via the splunk_hec exporter.
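As a rough illustration of the custom-configuration route mentioned above, the sketch below swaps the traces exporter to otlphttp. It assumes the chart exposes an agent.config override in values.yaml and uses a placeholder ingest endpoint and token variable; check the advanced configuration documentation for the exact keys supported by your chart version.

```yaml
# Illustrative values.yaml fragment only; the agent.config override key, the
# ingest endpoint format, and the token environment variable are assumptions.
agent:
  config:
    exporters:
      otlphttp:
        endpoint: https://ingest.us0.signalfx.com   # placeholder realm-specific ingest URL
        headers:
          X-SF-Token: ${SPLUNK_OBSERVABILITY_ACCESS_TOKEN}
    service:
      pipelines:
        traces:
          exporters: [otlphttp]
```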
The Helm chart currently uses Fluentd by default for Kubernetes logs collection, and supports an option to use native OpenTelemetry logs collection for higher throughput and performance. See the logs collection section for more information, along with performance benchmarks run internally.
This helm chart is tested and works with default configurations on the following Kubernetes distributions:
- Vanilla (unmodified version) Kubernetes
- Amazon Elastic Kubernetes Service including with Fargate profiles
- Azure Kubernetes Service
- Google Kubernetes Engine including GKE Autopilot
- Red Hat OpenShift
While this helm chart should work for other Kubernetes distributions, it may require additional configurations applied to values.yaml.
The following prerequisites are required to use the helm chart:
- Helm 3 (Helm 2 is not supported)
- Administrator access to your Kubernetes cluster and familiarity with your Kubernetes configuration. You must know where your log information is being collected in your Kubernetes deployment.
- Splunk Enterprise 7.0 or later.
- A minimum of one Splunk platform index ready to collect the log data. This index will be used for ingesting logs.
- An HTTP Event Collector (HEC) token and endpoint (a quick smoke test for these is sketched below). See the Splunk documentation on HTTP Event Collector for more information.
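If you want to confirm the HEC endpoint and token before installing the chart, a generic Splunk HEC smoke test such as the following can help. It is not part of the chart; substitute your own host, port, and token.

```bash
# Send a test event to the HEC endpoint to confirm the token and endpoint work.
# Use -k only if your HEC certificate is self-signed.
curl -k "https://localhost:8088/services/collector" \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"event": "HEC smoke test from splunk-otel-collector setup"}'
```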
To fully configure the Helm chart, see the advanced configuration.
In order to install the Splunk OpenTelemetry Collector in a Kubernetes cluster, at least one of the destinations (splunkPlatform or splunkObservability) has to be configured.
For Splunk Enterprise/Cloud the following parameters are required:
- splunkPlatform.endpoint: URL to a Splunk instance, e.g. "http://localhost:8088/services/collector"
- splunkPlatform.token: Splunk HTTP Event Collector token
For Splunk Observability Cloud the following parameters are required:
- splunkObservability.realm: Splunk realm to send telemetry data to.
- splunkObservability.accessToken: Your Splunk Observability org access token.
The following parameter is required for any of the destinations:
- clusterName: arbitrary value that identifies your Kubernetes cluster. The value will be associated with every trace, metric and log as the "k8s.cluster.name" attribute.
Run the following commands, replacing the parameters above with their appropriate values.
Add Helm repo
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
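After adding the repo, refreshing the local chart index ensures you install the latest released chart version (the same step is also recommended before upgrades, as noted below):

```bash
# Refresh the local Helm repo index so the latest chart version is available
helm repo update
```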
Sending data to Splunk Observability Cloud
helm install my-splunk-otel-collector --set="splunkObservability.realm=us0,splunkObservability.accessToken=xxxxxx,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector
Sending data to Splunk Enterprise or Splunk Cloud
helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=http://127.0.0.1:8088/services/collector,splunkPlatform.token=xxxxxx,splunkPlatform.metricsIndex=k8s-metrics,splunkPlatform.index=main,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector
Sending data to both Splunk Observability Cloud and Splunk Enterprise or Splunk Cloud
helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=http://127.0.0.1:8088/services/collector,splunkPlatform.token=xxxxxx,splunkPlatform.metricsIndex=k8s-metrics,splunkPlatform.index=main,splunkObservability.realm=us0,splunkObservability.accessToken=xxxxxx,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector
Consider enabling native OpenTelemetry logs collection for better throughput instead of the default Fluentd engine. To do so, add --set=logsEngine=otel to your installation command.
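For example, the Splunk Observability Cloud installation above with native OpenTelemetry logs collection enabled would look like this (realm, token, and cluster name are the same placeholders as before):

```bash
helm install my-splunk-otel-collector \
  --set="splunkObservability.realm=us0,splunkObservability.accessToken=xxxxxx,clusterName=my-cluster" \
  --set="logsEngine=otel" \
  splunk-otel-collector-chart/splunk-otel-collector
```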
Instead of setting Helm values as command-line arguments, a YAML file can be provided:
helm install my-splunk-otel-collector --values my_values.yaml splunk-otel-collector-chart/splunk-otel-collector
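As a rough sketch, a my_values.yaml covering the parameters described above might look like the following; the nested key layout mirrors the dotted --set paths. Keep only the destination sections you use and replace the placeholder values with your own.

```yaml
# my_values.yaml (illustrative only)
clusterName: my-cluster

# Splunk Enterprise / Splunk Cloud
splunkPlatform:
  endpoint: http://127.0.0.1:8088/services/collector
  token: xxxxxx
  index: main
  metricsIndex: k8s-metrics

# Splunk Observability Cloud
splunkObservability:
  realm: us0
  accessToken: xxxxxx

# Optional: use native OpenTelemetry logs collection instead of Fluentd
logsEngine: otel
```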
The rendered directory contains pre-rendered Kubernetes resource manifests.
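If you want to produce manifests for your own configuration instead of using the pre-rendered ones, helm template renders the chart locally without installing it; the release name and values file below are placeholders:

```bash
# Render the chart to plain Kubernetes manifests without installing it
helm template my-splunk-otel-collector splunk-otel-collector-chart/splunk-otel-collector \
  --values my_values.yaml > splunk-otel-collector-manifests.yaml
```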
Make sure you run helm repo update before you upgrade.
To upgrade a deployment, follow the instructions for installing but use upgrade instead of install, for example:
helm upgrade my-splunk-otel-collector splunk-otel-collector-chart/splunk-otel-collector --values my_values.yaml
To uninstall/delete a deployment with the name my-splunk-otel-collector:
helm delete my-splunk-otel-collector
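To confirm the release has been removed, you can list the remaining Helm releases (add -n with your namespace if you installed into a specific one):

```bash
# Verify that the release no longer appears among installed releases
helm list
```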
To fully configure the Helm chart, see the advanced configuration.
We welcome feedback and contributions from the community! Please see our contribution guidelines for more information on how to get involved.
Apache Software License version 2.0.
ℹ️ SignalFx was acquired by Splunk in October 2019. See Splunk SignalFx for more information.