Unable to see logs stored in the PVC on the Grafana dashboard with loki-distributed
Opened this issue · 1 comment
I am using the loki-distributed Helm chart and have configured the PVC to store logs for a long duration, but I can only see the last 3 hours of data on the Grafana dashboard.
In the ingester logs I can see this:

```
level=info ts=2024-11-14T08:56:06.656395165Z caller=flush.go:167 msg="flushing stream" user=sonarqube fp=d82a7b498cb1cdbf immediate=false num_chunks=1 labels="{app="sonarqube-instance-7847", container="database", filename="/var/log/pods/sonarqube_sonarqube-instance-7847-0_a1dd37e4-44da-453b-b371-6da6831a54b1/database/0.log", job="sonarqube/sonarqube-instance-7847", namespace="sonarqube", node_name="devops-dev-worker-7ac1e493-nn2sb", pod="sonarqube-instance-7847-0", stream="stderr", time="2024-11-14T09:26:01.956230139+01:00"}"
```
As far as I can tell, the data that has been flushed is not visible on the Grafana dashboard.

My values.yaml:
```yaml
gateway:
  enabled: false

querier:
  query_timeout: 5m
  persistence:
    enabled: true
    size: 10Gi
    storageClass: ""
    annotations: {}
  extraContainers:
    - name: reverse-proxy
      image: k8spin/loki-multi-tenant-proxy:v1.0.0
      args:
        - "run"
        - "--port=3101"
        - "--loki-server=https://<hostname>"
        - "--auth-config=/etc/reverse-proxy-conf/authn.yaml"
      ports:
        - name: http
          containerPort: 3101
          protocol: TCP
      resources:
        limits:
          cpu: 250m
          memory: 200Mi
        requests:
          cpu: 50m
          memory: 40Mi
      volumeMounts:
        - name: reverse-proxy-auth-config
          mountPath: /etc/reverse-proxy-conf
  extraVolumes:
    - name: reverse-proxy-auth-config
      secret:
        secretName: reverse-proxy-auth-config
  extraPorts:
    - port: 3101
      protocol: TCP
      name: http
      targetPort: http

ingester:
  persistence:
    enabled: true
    inMemory: false
    claims:
      - name: data
        size: 400Gi
        storageClass: ""

loki:
  image:
    tag: 2.9.8
  config: |
    auth_enabled: true

    server:
      {{- toYaml .Values.loki.server | nindent 6 }}

    common:
      compactor_address: http://{{ include "loki.compactorFullname" . }}:3100

    distributor:
      ring:
        kvstore:
          store: memberlist

    memberlist:
      join_members:
        - {{ include "loki.fullname" . }}-memberlist

    ingester_client:
      grpc_client_config:
        grpc_compression: gzip

    ingester:
      lifecycler:
        ring:
          kvstore:
            store: memberlist
          replication_factor: 1
      chunk_idle_period: 30m
      chunk_block_size: 262144
      chunk_encoding: snappy
      chunk_retain_period: 1m
      max_transfer_retries: 0
      max_chunk_age: 24h
      wal:
        dir: /var/loki/wal

    limits_config:
      enforce_metric_name: false
      reject_old_samples: true
      reject_old_samples_max_age: 168h
      max_cache_freshness_per_query: 10m
      split_queries_by_interval: 15m
      retention_period: 240h

    {{- if .Values.loki.schemaConfig}}
    schema_config:
    {{- toYaml .Values.loki.schemaConfig | nindent 2}}
    {{- end}}

    {{- if .Values.loki.storageConfig}}
    storage_config:
    {{- if .Values.indexGateway.enabled}}
    {{- $indexGatewayClient := dict "server_address" (printf "dns:///%s:9095" (include "loki.indexGatewayFullname" .)) }}
    {{- $_ := set .Values.loki.storageConfig.boltdb_shipper "index_gateway_client" $indexGatewayClient }}
    {{- end}}
    {{- toYaml .Values.loki.storageConfig | nindent 2}}
    {{- if .Values.memcachedIndexQueries.enabled }}
      index_queries_cache_config:
        memcached_client:
          addresses: dnssrv+_memcached-client._tcp.{{ include "loki.memcachedIndexQueriesFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}
          consistent_hash: true
    {{- end}}
    {{- end}}

    runtime_config:
      file: /var/{{ include "loki.name" . }}-runtime/runtime.yaml

    chunk_store_config:
      max_look_back_period: 0s
      {{- if .Values.memcachedChunks.enabled }}
      chunk_cache_config:
        embedded_cache:
          enabled: false
        memcached_client:
          consistent_hash: true
          addresses: dnssrv+_memcached-client._tcp.{{ include "loki.memcachedChunksFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}
      {{- end }}
      {{- if .Values.memcachedIndexWrites.enabled }}
      write_dedupe_cache_config:
        memcached_client:
          consistent_hash: true
          addresses: dnssrv+_memcached-client._tcp.{{ include "loki.memcachedIndexWritesFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}
      {{- end }}

    table_manager:
      retention_deletes_enabled: true
      retention_period: 240h

    query_range:
      align_queries_with_step: true
      max_retries: 5
      cache_results: true
      parallelise_shardable_queries: true
      results_cache:
        cache:
          {{- if .Values.memcachedFrontend.enabled }}
          memcached_client:
            addresses: dnssrv+_memcached-client._tcp.{{ include "loki.memcachedFrontendFullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}
            consistent_hash: true
          {{- else }}
          embedded_cache:
            enabled: true
            ttl: 24h
          {{- end }}

    frontend_worker:
      {{- if .Values.queryScheduler.enabled }}
      scheduler_address: {{ include "loki.querySchedulerFullname" . }}:9095
      {{- else }}
      frontend_address: {{ include "loki.queryFrontendFullname" . }}-headless:9095
      {{- end }}

    frontend:
      log_queries_longer_than: 60s
      compress_responses: true
      {{- if .Values.queryScheduler.enabled }}
      scheduler_address: {{ include "loki.querySchedulerFullname" . }}:9095
      {{- end }}
      tail_proxy_url: http://{{ include "loki.querierFullname" . }}:3100

    compactor:
      shared_store: filesystem
      working_directory: /var/loki/compactor
      compaction_interval: 10m
      retention_enabled: true
      retention_delete_delay: 2h
      retention_delete_worker_count: 150
      delete_request_store: filesystem

    ruler:
      storage:
        type: local
        local:
          directory: /etc/loki/rules
      ring:
        kvstore:
          store: memberlist
      rule_path: /tmp/loki/scratch
      alertmanager_url: https://alertmanager.xx
      external_url: https://alertmanager.xx

  # -- Check https://grafana.com/docs/loki/latest/configuration/#schema_config for more info on how to configure schemas
  schemaConfig:
    configs:
      - from: "2020-09-07"
        store: boltdb-shipper
        object_store: filesystem
        schema: v11
        index:
          prefix: loki_index_
          period: 24h
  # -- Check https://grafana.com/docs/loki/latest/configuration/#storage_config for more info on how to configure storages
  storageConfig:
    boltdb_shipper:
      shared_store: filesystem
      active_index_directory: /var/loki/index
      cache_location: /var/loki/cache
      cache_ttl: 168h
    filesystem:
      directory: /var/loki/chunks
    # -- Uncomment to configure each storage individually
    # azure: {}
    # gcs: {}
    # s3: {}
    # boltdb: {}

ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: loki-distributed-basic-auth
    nginx.ingress.kubernetes.io/auth-secret-type: auth-map
  paths:
    distributor:
      - /api/prom/push
      - /loki/api/v1/push
    querier:
      - /api/prom/tail
      - /loki/api/v1/tail
    query-frontend:
      - /loki/api
    ruler:
      - /api/prom/rules
      - /loki/api/v1/rules
      - /prometheus/api/v1/rules
      - /prometheus/api/v1/alerts
  hosts:
    - <hostname>
  tls:
    - secretName: host-tls
      hosts:
        - <hostname>

compactor:
  enabled: true
  persistence:
    enabled: true
    size: 10Gi
    storageClass: ""
    annotations: {}
    claims:
      - name: data
        size: 10Gi
        storageClass: ""

indexGateway:
  enabled: true
  persistence:
    enabled: true
    inMemory: false
    size: 10Gi
    storageClass: ""
    annotations: {}

memcachedChunks:
  enabled: true
  persistence:
    enabled: true
    size: 10Gi
    storageClass: ""

memcachedFrontend:
  enabled: true
  persistence:
    enabled: true
    size: 10Gi
    storageClass: ""

memcachedIndexQueries:
  enabled: true
  persistence:
    enabled: true
    size: 10Gi
    storageClass: ""

memcachedIndexWrites:
  enabled: true
  persistence:
    enabled: true
    size: 10Gi
    storageClass: ""
```
Chunks and the index are being created as well.
The above is my values.yaml file; could anyone please help me figure out what I have configured wrong?
I want to see all logs within the configured retention period on the Grafana dashboard.
Configuration questions have a better chance of being answered if you ask them on the community forums. More people monitor that channel.
I was searching for configuration settings with a 3 hour default, and found this in the Upgrade docs: "`query_ingesters_within` under the `querier` config now defaults to `3h`, previously it was `0s`. Any query (or subquery) that has an end time more than `3h` ago will not be sent to the ingesters, this saves work on the ingesters for data they normally don't contain. If you regularly write old data to Loki you may need to return this value to `0s` to always query ingesters." https://grafana.com/docs/loki/latest/setup/upgrade/#loki-6
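If that default is what you are hitting, the override belongs in the `querier` block of the Loki config itself, not in the chart's top-level `querier` values key. A minimal sketch, assuming it is appended to the `loki.config` template in the values.yaml you posted (the `querier:` stanza shown here is not in your current config):

```yaml
loki:
  config: |
    # ...existing config from the values.yaml above...

    # Sketch: return query_ingesters_within to its old default so queriers
    # always ask the ingesters, regardless of how old the query's end time is.
    querier:
      query_ingesters_within: 0s
```

This mirrors the pre-upgrade `0s` default mentioned in the quoted note; treat it as something to test rather than a confirmed fix for the 3-hour cutoff.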