This demonstrates an advanced nixops deployment of
- the GlusterFS distributed file system
- a replicated setup across 3 machines in AWS Frankfurt
- read-only geo-replication to another gluster cluster (could be on the other side of the world)
- mounting the file system from a client-only node
- use of gluster's SSL support for encryption and server/client authentication
- use of consul to orchestrate volume-initialisation across machines on first boot
- the whole thing running over the tinc VPN for security
all in one declarative and reproducible setup.
After running this deployment, you'll have 3 machines which all have a /glustermount
directory that has the distributed file system mounted.
If you drop some files in there, they will be safely stored with triple-redundancy in Frankfurt, and after a few seconds they will appear on the geo-replicated mirror.
- An Amazon AWS account
- AWS credentials set up in `~/.aws/credentials` (see here); should look like this:

  ```
  [myprofilename]
  aws_access_key_id = AAAAAAAAAAAAAAAAAAAA
  aws_secret_access_key = ssssssssssssssssssssssssssssssssssssssss
  ```

  The account must have EC2 permissions.
- Make sure you have an SSH key configured in the `eu-central-1` region.
- Edit `example-gluster-cluster.nix`, changing:
  - `deployment.ec2.accessKeyId = "...";` to your `myprofilename` from the config file above
  - `deployment.ec2.keyPair = "...";` to the name of your SSH key as configured in AWS
- In your AWS account, make sure that the `default` security group in the `eu-central-1` region lets port `655` (`TCP` and `UDP`) through so that `tinc` can communicate across the machines.
- Also make sure that you let SSH from the Internet through so that nixops and you can connect to the machines.
- `nix` and `nixops` installed (I'm using versions nix 1.11.8 and nixops 1.5)
Also know that:

- `example-secrets/` contains example tinc keys, Gluster SSL keys etc. that I generated for this purpose. Use them only for testing. The locations in the `.nix` files where they are used explain how you can generate your own.
- The VPN is there for a reason: Running Consul on the open Internet is not safe. Gluster technically is, but I still find it safer to run that inside the VPN, too.
- You shouldn't forget to shut down all machines when you're done trying this example.
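
For example, your own Gluster SSL key and self-signed certificate can be generated with `openssl` roughly like this (the file names and the CN below are assumptions for illustration; check the comments in the `.nix` files for the exact paths and identities this deployment expects):

```shell
# Generate a private key for Gluster's TLS support
openssl genrsa -out glusterfs.key 2048

# Create a self-signed certificate; the CN is a placeholder --
# use whatever identity your gluster SSL authentication setup expects
openssl req -new -x509 -key glusterfs.key \
  -subj "/CN=gluster-node" -days 365 -out glusterfs.pem
```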
Run:

```sh
nixops create -d gluster-test-deployment '<example-gluster-cluster.nix>'
env NIX_PATH=.:nixpkgs=https://github.com/nh2/nixpkgs/archive/4b6f050.tar.gz nixops deploy -d gluster-test-deployment
```

This should complete without errors, and you should have your gluster cluster ready.
(To destroy it when you're done, use `nixops destroy -d gluster-test-deployment --confirm`; otherwise you keep paying for the machines.)
Open 3 terminals to 3 different machines:

```sh
nixops ssh -d gluster-test-deployment gluster-cluster-1
nixops ssh -d gluster-test-deployment gluster-cluster-2 -t 'watch -n0.1 ls /glustermount'
nixops ssh -d gluster-test-deployment gluster-georep-1 -t 'watch -n0.1 ls /glustermount'
```

Then in the first terminal, run `touch /glustermount/hello`.
You should see the file `hello` appear on the second machine immediately, and on the georep machine a few seconds later.