Code snippets and notes on running Elastic Cloud on Kubernetes (ECK).
Install the Elastic Cloud on Kubernetes (ECK) operator:
kubectl create -f crds.yaml
kubectl create -f operator.yaml
# confirm operator installation
kubectl -n elastic-system get pod
# Monitor the operator logs:
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
Create an Elasticsearch cluster:
kubectl apply -f reddot.yaml
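reddot.yaml itself is not included in these notes; the sketch below shows roughly what it might contain. The version, node count, volume size, and the use of the pure-file storage class (mentioned in the demo flow) are assumptions; the Ingress/LoadBalancer exposing Kibana at kibana.purestorage.int is not shown.
# reddot.yaml (sketch)
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: reddot
  namespace: bds
spec:
  version: 7.10.2
  nodeSets:
  - name: default
    count: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi
        storageClassName: pure-file
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: reddot
  namespace: bds
spec:
  version: 7.10.2
  count: 1
  elasticsearchRef:
    name: reddot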
Browse to the Kibana UI at kibana.purestorage.int.
Get the elastic user password:
kubectl -n bds get secret reddot-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
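To check the cluster with that password outside of Kibana, you can port-forward the HTTP service that ECK creates (reddot-es-http) and query it with curl; -k skips the self-signed certificate, assuming TLS was left at the ECK default.
# store the password, port-forward, and hit the cluster health API
PASSWORD=$(kubectl -n bds get secret reddot-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
kubectl -n bds port-forward service/reddot-es-http 9200 &
curl -k -u "elastic:$PASSWORD" "https://localhost:9200/_cluster/health?pretty"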
Create index patterns k8s-logs and k8s-systemd: go to Kibana - Stack Management - Index Patterns.
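If you prefer to script this instead of clicking through the UI, the Kibana saved objects API can create the index patterns; the https protocol for KIBANA_URL and the @timestamp time field are assumptions, and $PASSWORD is reused from above.
KIBANA_URL=https://kibana.purestorage.int
for PATTERN in 'k8s-logs' 'k8s-systemd'; do
  curl -k -u "elastic:$PASSWORD" -X POST "$KIBANA_URL/api/saved_objects/index-pattern" \
    -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
    -d "{\"attributes\":{\"title\":\"$PATTERN\",\"timeFieldName\":\"@timestamp\"}}"
done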
Import the k8s-logs dashboard from export.ndjson: go to Kibana - Stack Management - Saved Objects.
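The same import can be scripted with the Kibana saved objects import API (multipart upload of the ndjson file):
curl -k -u "elastic:$PASSWORD" -X POST "$KIBANA_URL/api/saved_objects/_import?overwrite=true" \
  -H 'kbn-xsrf: true' --form file=@export.ndjson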
Create a secret for S3 access keys:
kubectl create secret generic es-s3-keys -n elastic --from-literal=access-key='xxxx' --from-literal=secret-key='vvvv'
Install the S3 repository plugin (add it to reddot.yaml and re-apply):
kubectl apply -f reddot.yaml
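These notes do not show what changed in reddot.yaml for this step; under ECK the usual pattern is an init container that installs the plugin plus the secret mounted as secure settings, roughly as sketched below (the nodeSet name and the key remapping are assumptions). Note that ECK expects the secure settings secret in the same namespace as the Elasticsearch resource, so the secret created with -n elastic above may need to be created in bds instead.
# additions to the Elasticsearch spec in reddot.yaml (sketch)
spec:
  secureSettings:
  - secretName: es-s3-keys
    entries:
    - key: access-key
      path: s3.client.default.access_key
    - key: secret-key
      path: s3.client.default.secret_key
  nodeSets:
  - name: default
    podTemplate:
      spec:
        initContainers:
        - name: install-plugins
          command: ["sh", "-c", "bin/elasticsearch-plugin install --batch repository-s3"]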
Go to the Kibana Stack Management UI to register the repository.
Create a FlashBlade (FB) S3 repository:
PUT _snapshot/reddot-s3-repo?pretty
{
  "type": "s3",
  "settings": {
    "bucket": "deephub",
    "base_path": "elastic/snapshots",
    "endpoint": "192.168.170.11",
    "protocol": "http",
    "max_restore_bytes_per_sec": "1gb",
    "max_snapshot_bytes_per_sec": "200mb"
  }
}
Check the repositories on the Snapshot and Restore - Repositories UI.
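The repository can also be checked from Dev Tools (or curl) with the snapshot APIs:
GET _snapshot/reddot-s3-repo
POST _snapshot/reddot-s3-repo/_verify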
Create a snapshot policy on Stack Management UI.
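Equivalently, a policy can be created with the snapshot lifecycle management API; the schedule and retention values below are placeholders, not values from this setup.
PUT _slm/policy/reddot-nightly
{
  "schedule": "0 30 1 * * ?",
  "name": "<reddot-snap-{now/d}>",
  "repository": "reddot-s3-repo",
  "config": {
    "indices": ["*"]
  },
  "retention": {
    "expire_after": "30d",
    "min_count": 5,
    "max_count": 50
  }
}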
Demo flow:
- k8s cluster walk through, PSO storage class
- k8s logging with Elastic & FB: search 'error', dashboard
- Scale Elastic: data node count, pure-file, FB UI
- Snapshot to S3: repo setting, s3 ls
# e.g. with the AWS CLI against the FlashBlade S3 endpoint
aws s3 ls s3://deephub/elastic/snapshots/ --endpoint-url http://192.168.170.11