signalfx/splunk-otel-collector-chart

Send logs to multiple indexes with an annotation

iacineIT opened this issue · 8 comments

What happened?

Hello,

I have an issue with the Splunk OTel Collector.
When I enable the splunk.com/index annotation, logs are sent neither to the default index nor to the index specified in the annotation.
I can send logs manually from the forwarder container, so I believe the Splunk URL and the token are correct.

Steps to Reproduce

1: Install the chart on GKE.
2: Define the values:

clusterName: "xxxx"
splunkPlatform:
  endpoint: "https://xxxx/services/collector"
  token: "xxxx"
  index: "k8s-system_sandbox"
  source: "kubernetes"
  logsEnabled: true
logsEngine: fluentd
cloudProvider: "gcp"
distribution: "gke"

3: Annotate the namespace:
kubectl annotate namespaces nginx splunk.com/index=test_web_sandbox
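
To confirm the annotation was applied, the namespace annotations can be printed, for example:

# Prints all annotations on the nginx namespace;
# splunk.com/index should show test_web_sandbox.
kubectl get namespace nginx -o jsonpath='{.metadata.annotations}'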

Expected Result

Logs from pods in the nginx namespace are sent to the test_web_sandbox index in Splunk Cloud.

Actual Result

Logs are sent neither to the test_web_sandbox index nor to the k8s-system_sandbox index.

Chart version

Splunk-otel-collector-0.81.0

Environment information

Cloud: GKE
k8s version: 1.26.5-gke.1200
Splunk cloud

Chart configuration

clusterName: "xxxx"
splunkPlatform:
  endpoint: "https://xxxx/services/collector"
  token: "xxxx"
  index: "k8s-system_sandbox"
  source: "kubernetes"
  logsEnabled: true
logsEngine: fluentd
cloudProvider: "gcp"
distribution: "gke"

Log output

error	exporterhelper/queued_retry.go:391	Exporting failed. The error is not retryable. Dropping data.	{"kind": "exporter", "data_type": "logs", "name": "splunk_hec/platform_logs", "error": "Permanent error: \"HTTP/1.1 400 Bad Request\\r\\nContent-Length: 60\\r\\nConnection: keep-alive\\r\\nContent-Type: application/json; charset=UTF-8\\r\\nDate: Tue, 01 Aug 2023 17:02:32 GMT\\r\\nServer: Splunkd\\r\\nVary: Authorization\\r\\nX-Content-Type-Options: nosniff\\r\\nX-Frame-Options: SAMEORIGIN\\r\\n\\r\\n{\\\"text\\\":\\\"Incorrect index\\\",\\\"code\\\":7,\\\"invalid-event-number\\\":1}\"", "dropped_items": 24}
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
	go.opentelemetry.io/collector/exporter@v0.81.0/exporterhelper/queued_retry.go:391
go.opentelemetry.io/collector/exporter/exporterhelper.(*logsExporterWithObservability).send
	go.opentelemetry.io/collector/exporter@v0.81.0/exporterhelper/logs.go:124
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1
	go.opentelemetry.io/collector/exporter@v0.81.0/exporterhelper/queued_retry.go:195
go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func1

Additional context

No response

Hey, can you first verify whether the indexes test_web_sandbox and k8s-system_sandbox exist in Splunk?
I just want to make sure, as Incorrect index is a typical error returned by the HEC endpoint when an index doesn't exist. This error originates from the Splunk instance itself, so something is definitely misconfigured on that side.
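
If you prefer checking from the command line rather than Splunk Web, here is a minimal sketch against Splunk's REST API. It assumes a user with access to the management port (8089); the host and credentials below are placeholders, and on Splunk Cloud that port is usually not reachable without extra setup, in which case Settings > Indexes in Splunk Web is the easier route.

# Placeholders: replace the host, user, and password with your own.
# Lists the indexes visible to that user; both k8s-system_sandbox
# and test_web_sandbox should appear in the output.
curl -k -u admin:changeme \
  "https://xxxx:8089/services/data/indexes?output_mode=json&count=0" | \
  grep -o '"name": *"[^"]*"'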

Hello,
yes, both indexes exist in Splunk.

@iacinedecathlon is the index selected for the above HEC token?

Hello @VihasMakwana ,
Yes :)

OK, can you run a curl request against the HEC endpoint from the place where the Splunk OTel Collector Chart is installed? Here is one guide on how to do it.
It is just to verify whether it returns the same Incorrect index error.
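
For reference, a minimal sketch of such a check, reusing the placeholder endpoint and token from the values above and overriding the index per event:

# Sends one test event to each index; a 200 response with
# {"text":"Success","code":0} means the token can write to that index,
# while the Incorrect index error reproduces the problem.
curl -k "https://xxxx/services/collector" \
  -H "Authorization: Splunk xxxx" \
  -d '{"event": "index test", "sourcetype": "manual", "index": "k8s-system_sandbox"}'

curl -k "https://xxxx/services/collector" \
  -H "Authorization: Splunk xxxx" \
  -d '{"event": "index test", "sourcetype": "manual", "index": "test_web_sandbox"}'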

Also, can you reinstall and check whether any additional errors show up?

In my experience, every time Incorrect index was returned by Splunk, either the index name had a typo or the HEC token had no permission for the specified index.

It would be really cool if Splunk returned Incorrect index: $nameOfTheIndex. It would make debugging so much easier. Or the splunkexporter could also know and print which index was used.

> In my experience, every time Incorrect index was returned by Splunk, either the index name had a typo or the HEC token had no permission for the specified index.
>
> It would be really cool if Splunk returned Incorrect index: $nameOfTheIndex. It would make debugging so much easier. Or the splunkexporter could also know and print which index was used.

Yeah, it would be great if Splunk did that, but we have what we have 😅 Incorrect index is the only information returned in Splunk's response.

Hello,
I found the problem. It was an issue on the Splunk side directly. Thank you for your answers.