What are the best practices to enable internode encryption?
Closed this issue · 3 comments
Type of question
Are you asking about community best practices, how to implement a specific feature, or about general context and help around casskop?
Community best practices
Question
What did you do?
I am trying to enable internode encryption across two DCs on different cloud providers.
I am thinking of adding a pre-script to generate a keystore for each Cassandra node, and using a Secret to mount the truststore, its key, and the password.
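For reference, a rough sketch of what such a pre-script could look like, assuming a shared CA certificate and key are already mounted from a Secret. All file names, aliases, passwords, and the `cassandra-internode-tls` Secret name are hypothetical placeholders, not anything casskop provides:

```shell
#!/bin/sh
# Hypothetical sketch: generate a per-node keystore signed by a shared CA,
# and a truststore containing only that CA. Paths and passwords are
# placeholders; in practice they would come from mounted Secrets.
set -e

NODE_NAME="${HOSTNAME}"
STOREPASS="changeit"   # placeholder; mount the real password from a Secret

# 1. Per-node key pair in its own keystore.
keytool -genkeypair -keyalg RSA -keysize 2048 \
  -alias "${NODE_NAME}" -validity 365 \
  -dname "CN=${NODE_NAME}, OU=cassandra" \
  -keystore keystore.jks -storepass "${STOREPASS}"

# 2. CSR for this node, signed with the shared CA.
keytool -certreq -alias "${NODE_NAME}" \
  -keystore keystore.jks -storepass "${STOREPASS}" -file node.csr
openssl x509 -req -in node.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out node.crt

# 3. Import the CA and the signed node cert back into the keystore.
keytool -importcert -noprompt -alias rootca -file ca.crt \
  -keystore keystore.jks -storepass "${STOREPASS}"
keytool -importcert -noprompt -alias "${NODE_NAME}" -file node.crt \
  -keystore keystore.jks -storepass "${STOREPASS}"

# 4. The truststore only needs the CA, so it can be built once and shared,
#    e.g. via:
#    kubectl create secret generic cassandra-internode-tls \
#      --from-file=truststore.jks
keytool -importcert -noprompt -alias rootca -file ca.crt \
  -keystore truststore.jks -storepass "${STOREPASS}"
```

The `server_encryption_options` block in cassandra.yaml would then point at these keystore/truststore paths.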
What did you expect to see?
I wonder if there is something the operator can help manage here.
What did you see instead? Under which circumstances?
Environment
- casskop version:
docker-pullable://orangeopensource/casskop@sha256:c09fe8825cac05c2dc1149536c1eb64f17464499013e3bb97527e260796da96a
- Kubernetes version information:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:51:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
- Kubernetes cluster kind:
EKS and AKS
- Cassandra version:
v3.11
Hi @yannuil,
In our backlog we have a task to look at inter-DC encryption, but not yet across different providers!
As things stand, we plan on using a service mesh like Istio for this.
We can think of other ways too, but we are focused on releasing Backup & Restore at this point.
Don't hesitate to elaborate on your thoughts, and thanks for opening the issue.
Hi @fdehay
Thank you for your reply.
I am currently experimenting with Istio. With its improved support for connecting VMs into a Kubernetes-based service mesh, I think I can join a new Kubernetes-based datacenter to an existing VM-based datacenter in order to migrate the data into the Kubernetes cluster.
But it seems that during the gossip shadow round, C* 3.11.x does not accept 127.0.0.1 as the source address for the seeds. That is what happens under Istio's REDIRECT interception mode, which is the default. Istio offers a transparent proxy that preserves the original source address under TPROXY, but that feature currently seems broken. I opened istio/istio#27565 to see if they are planning to do something about it.
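For anyone following along, switching a pod to TPROXY interception is done with a per-pod annotation on the pod template at injection time. The pod name and image below are placeholders; only the `sidecar.istio.io/interceptionMode` annotation is the actual Istio knob:

```yaml
# Hypothetical pod-template fragment (names are placeholders).
# The annotation asks the Istio injector to configure the sidecar in
# TPROXY mode, which preserves the original source IP that the gossip
# shadow round expects -- this is the feature that seems broken today.
apiVersion: v1
kind: Pod
metadata:
  name: cassandra-example
  annotations:
    sidecar.istio.io/interceptionMode: TPROXY
spec:
  containers:
    - name: cassandra
      image: cassandra:3.11
```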
I am new to Kubernetes. After these weeks of learning, I agree that adding a service mesh would be a better tool to enforce encryption and other security related policies.
I am also looking forward to the Backup and Restore feature.
@yannuil good to hear that you're testing it. We spent some time testing Istio and Linkerd2, and both had issues with the gossip protocol. I created a few tickets that are supposedly solved in Istio now, but I think I still saw some users complaining about it. Linkerd2 was supposed to add all-traffic support a year ago, but the card is still pending on their project board. We were at least able to get 2 k8s clusters on the same network using Istio. I'm not sure if we got inter-cluster encryption working, but we'll rely on Istio for sure, because we want to avoid redoing the same work with different software.
Backup/Restore is coming soon 😉