tracer newRecordingSpan memory leak
Component(s)
No response
What happened?
Description
Observing a memory leak in the collector; the heap profile points at tracer newRecordingSpan allocations.
Steps to Reproduce
otelcol pods dedicated to tracing are running in Kubernetes.
Applications in the cluster export traces to the collector's OTLP gRPC endpoint (a minimal exporter sketch is shown below).
The collectors apply probabilistic sampling in equalizing mode and export the traces to AWS X-Ray.
The collector configuration is provided below.
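To illustrate the trace flow described above, here is a minimal sketch of an application exporting spans to the collector's OTLP gRPC receiver. The endpoint host, instrumentation name, and span name are placeholders; the real traffic comes from the cluster applications, not from this program.

```go
package main

import (
	"context"
	"log"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	// Export to the collector's OTLP gRPC receiver (0.0.0.0:4317 in the config below).
	// "otel-collector:4317" is a hypothetical in-cluster service name.
	exp, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("otel-collector:4317"),
		otlptracegrpc.WithInsecure(),
	)
	if err != nil {
		log.Fatal(err)
	}

	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	defer func() { _ = tp.Shutdown(ctx) }()
	otel.SetTracerProvider(tp)

	// Emit a single example span; the real workloads emit spans continuously.
	tracer := otel.Tracer("example-instrumentation")
	_, span := tracer.Start(ctx, "demo-operation")
	time.Sleep(10 * time.Millisecond)
	span.End()
}
```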
Profiles
CPU, memory, and goroutine pprof profiles are attached (see the retrieval sketch after the attachment list):
pprof.otelcol-contrib.samples.cpu.001.pb.gz
pprof.otelcol-contrib.goroutine.001.pb.gz
pprof.otelcol-contrib.alloc_objects.alloc_space.inuse_objects.inuse_space.001.pb.gz
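For reference, a minimal sketch of how such profiles can be pulled from the collector's pprof extension, which listens on localhost:1777 per the configuration below (reachable, for example, via kubectl port-forward). The output file names are illustrative and do not match the attachments exactly.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

// Pulls heap, goroutine, and CPU profiles from the collector's pprof
// extension (localhost:1777 in the config below) and writes them to
// local files that can be opened with `go tool pprof <file>`.
func main() {
	profiles := map[string]string{
		"heap.pb.gz":      "http://localhost:1777/debug/pprof/heap",
		"goroutine.pb.gz": "http://localhost:1777/debug/pprof/goroutine",
		"cpu.pb.gz":       "http://localhost:1777/debug/pprof/profile?seconds=30",
	}

	for file, url := range profiles {
		if err := download(url, file); err != nil {
			log.Fatalf("fetching %s: %v", url, err)
		}
		fmt.Println("wrote", file)
	}
}

func download(url, path string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(path)
	if err != nil {
		return err
	}
	defer out.Close()

	_, err = io.Copy(out, resp.Body)
	return err
}
```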
Collector version
0.114.0
Environment information
AWS EKS 1.30
The collectors run as containers.
OpenTelemetry Collector configuration
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  awsxray:
    index_all_attributes: true
  debug/detailed:
    sampling_initial: 1
    verbosity: detailed
  debug/normal:
    verbosity: normal

processors:
  batch:
    send_batch_max_size: 10000
    timeout: 1s
  memory_limiter:
    check_interval: 1s
    limit_percentage: 75
    spike_limit_percentage: 15
  probabilistic_sampler:
    mode: equalizing
    sampling_percentage: 1

extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  pprof:
    endpoint: localhost:1777
  zpages:
    endpoint: localhost:55679

service:
  extensions:
    - health_check
    - zpages
    - pprof
  telemetry:
    logs:
      level: info
    metrics:
      address: 0.0.0.0:8888
  pipelines:
    traces/awsxray:
      exporters:
        - debug/detailed
        - awsxray
      processors:
        - memory_limiter
        - probabilistic_sampler
        - batch
      receivers:
        - otlp
Log output
No errors in the logs.
Based on the pprof output shown here, this looks closely related to #10858.
I have reviewed this issue and filed open-telemetry/opentelemetry-go-contrib#6446 to investigate further and fix it in the OpenTelemetry Go Contrib gRPC instrumentation library.
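For context, the spans behind the newRecordingSpan allocations appear to come from the gRPC server-side instrumentation that the collector's OTLP receiver relies on. A minimal sketch of how that instrumentation is typically attached to a gRPC server is shown below, using the otelgrpc stats-handler API; this is illustrative wiring, not the collector's exact code.

```go
package main

import (
	"log"
	"net"

	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
	"google.golang.org/grpc"
)

func main() {
	lis, err := net.Listen("tcp", "0.0.0.0:4317")
	if err != nil {
		log.Fatal(err)
	}

	// The otelgrpc stats handler starts a span for every RPC it observes;
	// each recording span is allocated via the SDK's newRecordingSpan.
	// The leak discussed above is tracked in
	// open-telemetry/opentelemetry-go-contrib#6446.
	srv := grpc.NewServer(
		grpc.StatsHandler(otelgrpc.NewServerHandler()),
	)

	// A real server would register the OTLP trace service here.
	if err := srv.Serve(lis); err != nil {
		log.Fatal(err)
	}
}
```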