Mongo Replica Set on Kubernetes
Nodes
- 1 Kubernetes master
- at least 4 Kubernetes minion nodes
Kubernetes spec deployed
- 3 replication controllers
- 3 services
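One of the three replication controller/service pairs might look like the sketch below (mongo-2 and mongo-3 would follow the same pattern). The names, labels, image, and nodeSelector values are assumptions for illustration, not a tested spec:

```yaml
# Hypothetical spec for the first of the three RC/service pairs.
apiVersion: v1
kind: ReplicationController
metadata:
  name: mongo-1
spec:
  replicas: 1
  selector:
    app: mongo-1
  template:
    metadata:
      labels:
        app: mongo-1
    spec:
      nodeSelector:
        mongo-node: "node-1"   # pin this member to its own minion node
      containers:
      - name: mongo
        image: mongo
        command: ["mongod", "--replSet", "rs0"]
        ports:
        - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-1
spec:
  selector:
    app: mongo-1
  ports:
  - port: 27017
    targetPort: 27017
```

The nodeSelector assumes the minion nodes have been labeled beforehand, e.g. with `kubectl label node <node> mongo-node=node-1`, which also addresses the one-pod-per-node constraint discussed below.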
Mongo configuration
- 3 replica set members
- 1 primary, 2 secondaries
Once the three node Mongo replica set is configured on the Kubernetes nodes:
- kill the node running the primary
- Mongo holds an election, promoting one of the secondaries to primary
- Kubernetes reschedules the old primary's pod onto the 4th Kubernetes minion node
- the rescheduled member rejoins the replica set and catches up with the other two
- another election happens if needed
The Mongo database will be seeded with roughly 1 GB of arbitrary data to better simulate a production database going through the rescheduling, re-election, and data catch-up process.
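A minimal seeding sketch for the mongo shell, connected to the primary; the collection name `seed` and the ~1 KB payload size are arbitrary choices:

```javascript
// Build a ~1 KB document; roughly one million of these is ~1 GB of raw data.
function makeDoc(i) {
  return { _id: i, payload: new Array(1025).join("x") }; // 1024-char string
}

// In the mongo shell, against the primary:
// for (var i = 0; i < 1000000; i++) { db.seed.insert(makeDoc(i)); }
```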
A few issues still need to be worked out:
- how to route writes to the newly elected primary with Kubernetes
- how to route reads across all replica members (if desired)
- how to constrain the pods generated by 3 separate replication controllers to separate nodes (resource constraints)
- how to configure each new node on first deployment so it joins the replica set correctly
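For the two routing questions, one common approach is to let the driver do the routing: give each member its own Kubernetes service (a stable DNS name), list all three as seeds in the connection string, and the driver discovers the current primary for writes, while a read preference spreads reads across secondaries. A sketch, assuming the hypothetical per-member service names mongo-1/2/3:

```javascript
// Seed list names the three per-member Kubernetes services; the driver
// discovers the replica set topology from any reachable seed and always
// routes writes to the current primary, even after a re-election.
var uri = "mongodb://mongo-1:27017,mongo-2:27017,mongo-3:27017/test" +
          "?replicaSet=rs0" +
          "&readPreference=secondaryPreferred"; // read from secondaries when available
```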
- how to handle automatic `rs.initiate(config)` setup on `kubectl create -f rc.yml`
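One option for the `rs.initiate` question is to run it once, by hand or from a post-start script, on the first pod. A sketch of the config document, assuming the hypothetical per-member service names mongo-1/2/3, which resolve inside the cluster:

```javascript
// Replica set config; hosts are the per-member Kubernetes service names.
var config = {
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo-1:27017" },
    { _id: 1, host: "mongo-2:27017" },
    { _id: 2, host: "mongo-3:27017" }
  ]
};

// Run once, in the mongo shell on the first member:
// rs.initiate(config);
```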