Make exporter configuration more dynamic
Closed this issue · 2 comments
Describe the issue
Currently, the default (static) configuration file exports metrics to AMP, with no option to send them only to CloudWatch. If you don't use AMP and expect your metrics to be pushed only to CloudWatch, the exporter breaks and gets stuck in a CrashLoop because no AMP URL has been set.
I ended up creating my own ConfigMap and forking the whole chart codebase to make it fit.
```yaml
data:
  adot-config: |
    extensions:
      health_check:
      sigv4auth:
        region: us-east-1
    receivers:
      awscontainerinsightreceiver:
        collection_interval:
        container_orchestrator:
        add_service_as_attribute:
        prefer_full_pod_name:
        add_full_pod_name_metric_label:
    processors:
      batch/metrics:
        timeout: 60s
    exporters:
      awsemf:
        namespace: ContainerInsights
        log_group_name: '/aws/containerinsights/clou-eu-central-1/performance'
        log_stream_name: InputNodeName
        region: eu-central-1
        resource_to_telemetry_conversion:
          enabled: true
        dimension_rollup_option: NoDimensionRollup
        parse_json_encoded_attr_values:
          - Sources
          - kubernetes
        metric_declarations:
          # node metrics
          - dimensions: [[NodeName, InstanceId, ClusterName]]
            metric_name_selectors:
              - node_cpu_utilization
              - node_memory_utilization
              - node_network_total_bytes
              - node_cpu_reserved_capacity
              - node_memory_reserved_capacity
              - node_number_of_running_pods
              - node_number_of_running_containers
          - dimensions: [[ClusterName]]
            metric_name_selectors:
              - node_cpu_utilization
              - node_memory_utilization
              - node_network_total_bytes
              - node_cpu_reserved_capacity
              - node_memory_reserved_capacity
              - node_number_of_running_pods
              - node_number_of_running_containers
              - node_cpu_usage_total
              - node_cpu_limit
              - node_memory_working_set
              - node_memory_limit
          # pod metrics
          - dimensions: [[PodName, Namespace, ClusterName], [Service, Namespace, ClusterName], [Namespace, ClusterName], [ClusterName]]
            metric_name_selectors:
              - pod_cpu_utilization
              - pod_memory_utilization
              - pod_network_rx_bytes
              - pod_network_tx_bytes
              - pod_cpu_utilization_over_pod_limit
              - pod_memory_utilization_over_pod_limit
          - dimensions: [[PodName, Namespace, ClusterName], [ClusterName]]
            metric_name_selectors:
              - pod_cpu_reserved_capacity
              - pod_memory_reserved_capacity
          - dimensions: [[PodName, Namespace, ClusterName]]
            metric_name_selectors:
              - pod_number_of_container_restarts
          # cluster metrics
          - dimensions: [[ClusterName]]
            metric_name_selectors:
              - cluster_node_count
              - cluster_failed_node_count
          # service metrics
          - dimensions: [[Service, Namespace, ClusterName], [ClusterName]]
            metric_name_selectors:
              - service_number_of_running_pods
          # node fs metrics
          - dimensions: [[NodeName, InstanceId, ClusterName], [ClusterName]]
            metric_name_selectors:
              - node_filesystem_utilization
          # namespace metrics
          - dimensions: [[Namespace, ClusterName], [ClusterName]]
            metric_name_selectors:
              - namespace_number_of_running_pods
    service:
      pipelines:
        metrics:
          receivers:
            - awscontainerinsightreceiver
          processors:
            - batch/metrics
          exporters:
            - awsemf
      extensions:
        - health_check
        - sigv4auth
```
We need to build a template that lets users choose the exporter destination as well as the whole pipeline.
My proposal is to:
Template the configuration file with Helm and declare the export target in values, whether it's an export to AMP, to CloudWatch, or both.
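A minimal sketch of what this could look like. The value keys below (`adotCollector.exporters.*`) are hypothetical, not the chart's current schema; the `prometheusremotewrite` exporter with `sigv4auth` is the usual way an OTel collector writes to AMP:

```yaml
# values.yaml -- hypothetical keys for illustration
adotCollector:
  exporters:
    cloudwatch:
      enabled: true
    amp:
      enabled: false
      remoteWriteEndpoint: ""
```

```yaml
# ConfigMap template sketch: render only the exporters the user enabled
exporters:
{{- if .Values.adotCollector.exporters.cloudwatch.enabled }}
  awsemf:
    namespace: ContainerInsights
    region: {{ .Values.awsRegion }}
{{- end }}
{{- if .Values.adotCollector.exporters.amp.enabled }}
  prometheusremotewrite:
    endpoint: {{ .Values.adotCollector.exporters.amp.remoteWriteEndpoint }}
    auth:
      authenticator: sigv4auth
{{- end }}
service:
  pipelines:
    metrics:
      exporters:
      {{- if .Values.adotCollector.exporters.cloudwatch.enabled }}
        - awsemf
      {{- end }}
      {{- if .Values.adotCollector.exporters.amp.enabled }}
        - prometheusremotewrite
      {{- end }}
```

With CloudWatch-only values, no AMP endpoint is ever rendered, so the collector no longer crashes on a missing AMP URL.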
What did you expect to see?
The configuration file adapted for the CloudWatch-only use case.
Environment
This issue is environment-agnostic
Additional context
I'm willing to fix it by forking the chart and sending a PR for my proposal. Please let me know what you think about this issue.
This issue is stale because it has been open 90 days with no activity. If you want to keep this issue open, please leave a comment below and auto-close will be canceled.
This issue was closed because it has been marked as stale for 30 days with no activity.