See the coordinator and worker status in the Trino status:
> kubectl get trinos.tarim.deepexi.com trino-sample -oyaml
apiVersion: tarim.deepexi.com/v1
kind: Trino
metadata:
  generation: 1
  name: trino-sample
  namespace: default
spec:
  ...
status:
  # coordinator status
  coordinatorPod:
  - cpu: "1"
    memory: "1000"
    name: trino-sample-coordinator-9c9d4c79b-9tcbd
    podStatus: Running
    # when the coordinator pod is ready, the NodePort service is available
    ready: true
  # status of the whole Trino cluster:
  # STOPPED when trino.spec.pause is true
  # RUNNING when trino.spec.pause is false and all workloads are running
  # TRANSITIONING when trino.spec.pause is false and a workload is not ready
  status: RUNNING
  totalCpu: 3
  totalMemory: 3
  # worker status
  workerPod:
  - cpu: "1"
    memory: "1"
    name: trino-sample-worker-5f57b75674-5hgh2
    podStatus: Running
    ready: true
  - cpu: "1"
    memory: "1"
    name: trino-sample-worker-5f57b75674-9f9tq
    podStatus: Running
    ready: true
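To illustrate how the status fields above fit together, here is a minimal Python sketch (not the operator's actual code) of how the cluster-level status and the totalCpu/totalMemory fields could be derived from the per-pod entries. The function names and the uniform memory value of 1 per pod are assumptions for illustration; the source status reports memory in mixed units.

```python
# Illustrative sketch only -- hypothetical helpers, not part of the operator.

def cluster_status(pause, pods):
    """STOPPED when spec.pause is true; RUNNING when all pods are
    running and ready; TRANSITIONING otherwise (matches the comments
    in the status example above)."""
    if pause:
        return "STOPPED"
    if all(p["podStatus"] == "Running" and p["ready"] for p in pods):
        return "RUNNING"
    return "TRANSITIONING"

def totals(pods):
    """Sum cpu/memory over coordinator and worker pods,
    as in the totalCpu/totalMemory status fields."""
    return (sum(int(p["cpu"]) for p in pods),
            sum(int(p["memory"]) for p in pods))

# One coordinator plus two workers, as in the example status.
pods = [
    {"cpu": "1", "memory": "1", "podStatus": "Running", "ready": True},
    {"cpu": "1", "memory": "1", "podStatus": "Running", "ready": True},
    {"cpu": "1", "memory": "1", "podStatus": "Running", "ready": True},
]
print(cluster_status(False, pods))  # RUNNING
print(totals(pods))                 # (3, 3)
```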
Config example:

apiVersion: tarim.deepexi.com/v1
kind: Trino
metadata:
  name: trino-sample
spec:
  # catalog config, used by both the coordinator and workers
  cataLogConfig:
    tpcdsProperties: |
      connector.name=tpcds
      tpcds.splits-per-node=4
    tpchProperties: |
      connector.name=tpch
      tpch.splits-per-node=4
  # coordinator config
  coordinatorConfig:
    configProperties: |
      coordinator=true
      node-scheduler.include-coordinator=false
      http-server.http.port=8080
      query.max-memory=1GB
      query.max-memory-per-node=512MB
      query.max-total-memory-per-node=1GB
      memory.heap-headroom-per-node=512MB
      discovery.uri=http://localhost:8080
    jvmConfig: |
      -server
      -Xmx2G
      -XX:+UseG1GC
      -XX:G1HeapRegionSize=32M
      -XX:+UseGCOverheadLimit
      -XX:+ExplicitGCInvokesConcurrent
      -XX:+HeapDumpOnOutOfMemoryError
      -XX:+ExitOnOutOfMemoryError
      -Djdk.attach.allowAttachSelf=true
      -XX:-UseBiasedLocking
      -XX:ReservedCodeCacheSize=512M
      -XX:PerMethodRecompilationCutoff=10000
      -XX:PerBytecodeRecompilationCutoff=10000
      -Djdk.nio.maxCachedBufferSize=2000000
    logProperties: |
      io.trino=INFO
    nodeProperties: |
      node.environment=production
      node.data-dir=/data/trino
      plugin.dir=/usr/lib/trino/plugin
    # currently only one coordinator is supported
    num: 1
    # cpu request, in cores; default 1
    cpuRequest: 1
    # memory request, in MB; default 2048
    memoryRequest: 1000
  workerConfig:
    # Important: the JVM will not start if the config is invalid.
    # When changing query.max-memory, query.max-memory-per-node,
    # query.max-total-memory-per-node, or memory.heap-headroom-per-node,
    # keep in mind:
    #   query.max-memory must be bigger than query.max-memory-per-node;
    #   query.max-total-memory-per-node must be bigger than memory.heap-headroom-per-node.
    # The worker discovery.uri points at the coordinator service,
    # http://{trino-name}-trino:8080. This resource is named trino-sample,
    # so discovery.uri=http://trino-sample-trino:8080. Do not change it.
    configProperties: |
      coordinator=false
      node-scheduler.include-coordinator=false
      http-server.http.port=8080
      query.max-memory=1GB
      query.max-memory-per-node=512MB
      query.max-total-memory-per-node=1GB
      memory.heap-headroom-per-node=512MB
      discovery.uri=http://trino-sample-trino:8080
    jvmConfig: |
      -server
      -Xmx2G
      -XX:+UseG1GC
      -XX:G1HeapRegionSize=32M
      -XX:+UseGCOverheadLimit
      -XX:+ExplicitGCInvokesConcurrent
      -XX:+HeapDumpOnOutOfMemoryError
      -XX:+ExitOnOutOfMemoryError
      -Djdk.attach.allowAttachSelf=true
      -XX:-UseBiasedLocking
      -XX:ReservedCodeCacheSize=512M
      -XX:PerMethodRecompilationCutoff=10000
      -XX:PerBytecodeRecompilationCutoff=10000
      -Djdk.nio.maxCachedBufferSize=2000000
    logProperties: |
      io.trino=INFO
    nodeProperties: |
      node.environment=production
      node.data-dir=/data/trino
      plugin.dir=/usr/lib/trino/plugin
    # number of workers; default 1
    num: 2
    # cpu request, in cores; default 1
    cpuRequest: 1
    # memory request, in MB; default 2048
    memoryRequest: 1000
  # true: the CRD resource is created but no pods run; default false
  pause: false
  # add a NodePort service; default true
  nodePort: true
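Because an invalid memory configuration prevents the JVM from starting, it can help to check the two ordering rules above before applying the CRD. The following is a hedged Python sketch of such a pre-flight check; the helper names are hypothetical and this is not part of the operator, but the property names match configProperties.

```python
# Illustrative pre-flight check for the memory rules described above.
# Not part of the operator; parse_mb and check_memory_config are
# hypothetical helpers.

def parse_mb(value):
    """Convert a Trino size string like '1GB' or '512MB' to megabytes."""
    value = value.strip().upper()
    if value.endswith("GB"):
        return int(value[:-2]) * 1024
    if value.endswith("MB"):
        return int(value[:-2])
    raise ValueError(f"unsupported size: {value}")

def check_memory_config(props):
    # query.max-memory must be bigger than query.max-memory-per-node
    ok_query = (parse_mb(props["query.max-memory"])
                > parse_mb(props["query.max-memory-per-node"]))
    # query.max-total-memory-per-node must be bigger than
    # memory.heap-headroom-per-node
    ok_headroom = (parse_mb(props["query.max-total-memory-per-node"])
                   > parse_mb(props["memory.heap-headroom-per-node"]))
    return ok_query and ok_headroom

# The values from the workerConfig example above pass the check.
worker_props = {
    "query.max-memory": "1GB",
    "query.max-memory-per-node": "512MB",
    "query.max-total-memory-per-node": "1GB",
    "memory.heap-headroom-per-node": "512MB",
}
print(check_memory_config(worker_props))  # True
```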
Release Note
version 0.1
Basic functions:
- add a Trino cluster in Kubernetes
- edit the CRD to change the config; only the necessary parts are restarted
- add or delete workers by just changing num
- watch Deployments and Pods, and record the coordinator and worker status in the Trino status items