Cannot create directory for local deepstorage
yangyu66 opened this issue · 0 comments
Error from the task log `/druid/data/indexing-logs/index_kafka_sflow-enriched_0ad96e895edd0ba_gnkjfmjd.log`:
```
2022-10-12T20:55:10,945 INFO [[index_kafka_sflow-enriched_0ad96e895edd0ba_gnkjfmjd]-appenderator-persist] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Flushed in-memory data with commit metadata [AppenderatorDriverMetadata{segments={index_kafka_sflow-enriched_0ad96e895edd0ba_0=[SegmentWithState{segmentIdentifier=sflow-enriched_2022-10-12T19:00:00.000Z_2022-10-12T20:00:00.000Z_2022-10-12T19:00:01.059Z_13, state=APPENDING}, SegmentWithState{segmentIdentifier=sflow-enriched_2022-10-12T20:00:00.000Z_2022-10-12T21:00:00.000Z_2022-10-12T20:00:01.059Z_1, state=APPENDING}]}, lastSegmentIds={index_kafka_sflow-enriched_0ad96e895edd0ba_0=sflow-enriched_2022-10-12T20:00:00.000Z_2022-10-12T21:00:00.000Z_2022-10-12T20:00:01.059Z_1}, callerMetadata={nextPartitions=SeekableStreamStartSequenceNumbers{stream='sflow-enriched', partitionSequenceNumberMap={1=2227885}, exclusivePartitions=[]}, publishPartitions=SeekableStreamEndSequenceNumbers{stream='sflow-enriched', partitionSequenceNumberMap={1=2227885}}}}] for segments:
2022-10-12T20:55:10,946 INFO [[index_kafka_sflow-enriched_0ad96e895edd0ba_gnkjfmjd]-appenderator-persist] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Persisted stats: processed rows: [14126], persisted rows[0], sinks: [2], total fireHydrants (across sinks): [15], persisted fireHydrants (across sinks): [0]
2022-10-12T20:55:10,946 INFO [[index_kafka_sflow-enriched_0ad96e895edd0ba_gnkjfmjd]-appenderator-merge] org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Preparing to push (stats): processed rows: [14126], sinks: [2], fireHydrants (across sinks): [15]
2022-10-12T20:55:13,269 WARN [[index_kafka_sflow-enriched_0ad96e895edd0ba_gnkjfmjd]-appenderator-merge] org.apache.druid.java.util.common.RetryUtils - Retrying (1 of 4) in 686ms.
java.io.IOException: Cannot create directory '/druid/deepstorage/intermediate_pushes/45e40751-c490-46f5-91dc-cb63bec842a2'.
	at org.apache.commons.io.FileUtils.mkdirs(FileUtils.java:2200) ~[commons-io-2.11.0.jar:2.11.0]
	at org.apache.commons.io.FileUtils.forceMkdir(FileUtils.java:1383) ~[commons-io-2.11.0.jar:2.11.0]
	at org.apache.druid.segment.loading.LocalDataSegmentPusher.pushToPath(LocalDataSegmentPusher.java:92) ~[druid-server-0.22.1.jar:0.22.1]
	at org.apache.druid.segment.loading.LocalDataSegmentPusher.push(LocalDataSegmentPusher.java:68) ~[druid-server-0.22.1.jar:0.22.1]
	at org.apache.druid.segment.realtime.appenderator.StreamAppenderator.lambda$mergeAndPush$4(StreamAppenderator.java:886) ~[druid-server-0.22.1.jar:0.22.1]
	at org.apache.druid.java.util.common.RetryUtils.retry(RetryUtils.java:129) ~[druid-core-0.22.1.jar:0.22.1]
	at org.apache.druid.java.util.common.RetryUtils.retry(RetryUtils.java:81) ~[druid-core-0.22.1.jar:0.22.1]
	at org.apache.druid.java.util.common.RetryUtils.retry(RetryUtils.java:163) ~[druid-core-0.22.1.jar:0.22.1]
	at org.apache.druid.java.util.common.RetryUtils.retry(RetryUtils.java:153) ~[druid-core-0.22.1.jar:0.22.1]
	at org.apache.druid.segment.realtime.appenderator.StreamAppenderator.mergeAndPush(StreamAppenderator.java:882) ~[druid-server-0.22.1.jar:0.22.1]
	at org.apache.druid.segment.realtime.appenderator.StreamAppenderator.lambda$push$1(StreamAppenderator.java:744) ~[druid-server-0.22.1.jar:0.22.1]
	at com.google.common.util.concurrent.Futures$1.apply(Futures.java:713) [guava-16.0.1.jar:?]
	at com.google.common.util.concurrent.Futures$ChainingListenableFuture.run(Futures.java:861) [guava-16.0.1.jar:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_275]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_275]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_275]
2022-10-12T20:55:14,057 WARN [[index_kafka_sflow-enriched_0ad96e895edd0ba_gnkjfmjd]-appenderator-merge] org.apache.druid.java.util.common.RetryUtils - Retrying (2 of 4) in 2,207ms.
```
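The `IOException` comes from a plain `FileUtils.mkdirs()` call, so the failure can be reproduced without Druid. The following is a minimal sketch (not part of Druid; the function name `check_deepstorage` is made up for illustration) to verify from inside a task pod whether the deep storage path can actually be created:

```shell
# Sketch: verify that the configured deep storage path can be created,
# mirroring the FileUtils.mkdirs() call that fails in the log above.
check_deepstorage() {
  # Try to create a throwaway subdirectory, like intermediate_pushes/<uuid>.
  probe="$1/intermediate_pushes/probe-$$"
  if mkdir -p "$probe" 2>/dev/null; then
    rmdir "$probe"
    echo "OK: $1 is writable"
  else
    echo "FAIL: cannot create directory under $1"
  fi
}

# Run inside the MiddleManager/peon pod, e.g. after
#   kubectl exec -it <middlemanager-pod> -- sh
check_deepstorage /druid/deepstorage
```

If this prints `FAIL`, the path is missing, read-only, or not mounted in the pod at all, which would explain the retry loop in the task log.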
Operator file: `examples/tiny-cluster.yaml`, with the changes below for remote MiddleManagers:
```yaml
middlemanagers:
  druid.port: 8091
  extra.jvm.options: |-
    -Xmx8G
    -Xms8G
  nodeType: middleManager
  nodeConfigMountPath: /opt/druid/conf/druid/cluster/data/middleManager
  podDisruptionBudgetSpec:
    maxUnavailable: 1
  ports:
    - containerPort: 8100
      name: peon-0
  replicas: 2
  resources:
    limits:
      cpu: "2"
      memory: 12Gi
    requests:
      cpu: "2"
      memory: 10Gi
  livenessProbe:
    initialDelaySeconds: 30
    httpGet:
      path: /status/health
      port: 8091
  readinessProbe:
    initialDelaySeconds: 30
    httpGet:
      path: /status/health
      port: 8091
  runtime.properties: |-
    druid.service=druid/middleManager
    druid.worker.capacity=6
    druid.indexer.task.baseTaskDir=/opt/druid/var/druid/task
    druid.server.http.numThreads=10
    druid.indexer.fork.property.druid.processing.buffer.sizeBytes=1
    druid.indexer.fork.property.druid.processing.numMergeBuffers=1
    druid.indexer.fork.property.druid.processing.numThreads=1
    # Processing threads and buffers on Peons
    druid.indexer.fork.property.druid.processing.numMergeBuffers=2
    druid.indexer.fork.property.druid.processing.buffer.sizeBytes=100000000
    druid.indexer.fork.property.druid.processing.numThreads=1
  services:
    - spec:
        clusterIP: None
        ports:
          - name: tcp-service-port
            port: 8091
            targetPort: 8091
        type: ClusterIP
  volumeClaimTemplates:
    - metadata:
        name: data-volume
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: rook-cephfs
  volumeMounts:
    - mountPath: /opt/druid/var
      name: data-volume
```
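Note that the config above mounts a volume only at `/opt/druid/var`, while the failing path is `/druid/deepstorage`. Assuming local deep storage (`druid.storage.type=local` with a base directory under `/druid/deepstorage` — an inference from the log, not confirmed here), every peon must see the same filesystem at that path. With `replicas: 2` MiddleManagers on different nodes, that requires a shared `ReadWriteMany` mount at the deep storage path. A sketch of one possible fix, where the claim name `deepstorage-volume` and the storage size are assumptions:

```yaml
# Sketch only: give the MiddleManagers a shared volume at the deep storage
# path so peons can create /druid/deepstorage/intermediate_pushes/<uuid>.
# The claim name, size, and storageClassName are illustrative assumptions;
# rook-cephfs is reused from the existing config because CephFS supports RWX.
middlemanagers:
  volumeClaimTemplates:
    - metadata:
        name: deepstorage-volume
      spec:
        accessModes:
          - ReadWriteMany          # must be shared across pods/nodes
        resources:
          requests:
            storage: 100Gi
        storageClassName: rook-cephfs
  volumeMounts:
    - mountPath: /druid/deepstorage
      name: deepstorage-volume
```

The same volume would also need to be mounted on the Historicals and any other node that reads segments from deep storage; alternatively, switching to a true remote deep storage (S3, HDFS, etc.) avoids the shared-filesystem requirement entirely.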