/vagrant_haproxy_vault-dr-clusters

HashiCorp vagrant demo of HAProxy with vault DR-Primary & DR-Secondary clusters

🍏 🍎 For ARM64 versions (Apple Silicon) using VMware Fusion see the /arm64 branch 🍎 🍏

See LASTRUN.md for details of the most recent tests.

This repo is a mock example of two Vault clusters, each serviced by its own HAProxy load balancer using the X-Forwarded-For header.
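
As a hedged illustration (not necessarily this repo's exact haproxy.cfg), the X-Forwarded-For behaviour comes from HAProxy's option forwardfor directive on an HTTP-mode backend - the server address below is a placeholder:

# // Sketch only - appended to /etc/haproxy/haproxy.cfg on an LB node:
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
backend vault_backend
    mode http
    option forwardfor                       # add X-Forwarded-For with the client IP
    server vault1 192.168.x.181:8200 check  # placeholder node address
EOF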

It's also possible to use Vault Enterprise HSM with SoftHSM as an auto-unseal type, as detailed below.
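
As a further hedged sketch (the library path, slot, PIN & key label are assumptions, not values from this repo), such a SoftHSM auto-unseal relies on Vault's pkcs11 seal stanza:

# // Sketch only - a pkcs11 seal stanza appended to the Vault server HCL:
cat >> /etc/vault.d/vault.hcl <<'EOF'
seal "pkcs11" {
  lib       = "/usr/lib/softhsm/libsofthsm2.so"  # assumed Debian SoftHSM2 path
  slot      = "0"                                # assumed token slot
  pin       = "1234"                             # assumed user PIN
  key_label = "vault-hsm-key"                    # assumed key label
}
EOF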

📝 Past tests on x86 / AMD64 hosts with Windows (10 & 11), Linux (Debian 11) & macOS 12.4 using VirtualBox 7.0.10r158379 + Vagrant 2.3.7 & earlier. 📝

Makeup & Concept

Two Vault clusters are deployed, labelled dr1primary & dr2secondary, each with an HAProxy (Layer 7) reverse proxy managing requests from end-users or the other Vault cluster. The address of the load balancer in each cluster is configured as the HCL High Availability parameters (api_addr & cluster_addr) for all the nodes in that cluster.
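
As a hedged sketch of that wiring (the IP octets mirror the diagram below; the file path & full addresses are assumptions), a dr1primary node's HCL could carry:

# // Sketch only - HA parameters pointing at the cluster's LB:
cat >> /etc/vault.d/vault.hcl <<'EOF'
api_addr     = "http://192.168.x.254:8200"   # dr1primary's HAProxy load-balancer
cluster_addr = "https://192.168.x.181:8201"  # this node's own cluster RPC address
EOF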

Once successfully launched, the nodes are reachable at the IPs depicted below:

                        VAULT SYS 💻 / USER 😎 REQUESTS
              ▒.................................................▒ 
______________|____________________ 🌍 WAN  ____________________|______________
 API (80,443)╲  ╲ TCP RPC (8200)  |   /    |    TCP RPC (8200)╱  ╱API (80,443)
              ╲  ╲                |  NET   |                 ╱  ╱              
 dr1primary   254.╦═════════════╦ |        | ╓══════════════╦.253  dr2secondary           
                ║ load-balancer ║ |        | ║ load-balancer ║                 
     backend    ║   (haproxy)   ║ |        | ║   (haproxy)   ║    backend      
 ,============. ╚╩═════════════╩╝ |        | ╚╩═════════════╩╝ ,============.  
 |  servers   |          ║        |        |         ║         |  servers   |  
 |.----------.|         ▲▼        |        |         ▲▼        |.----------.|  
 || v1 v2 v3 ||◄► ══ ◄► ═╝        |        |         ╚═◄► ══ ◄►|| v1 v2 v3 ||  
 |'----------'|                   |        |                   |'----------'|  
 | |||||||||| |.183, .182, .181...|        |...173, .172, .171 | |||||||||| |  
 |============|-  RPC & API       |        |        RPC & API -|============|  
__________________________________|        |___________________________________
v1 = vault1, etc...

Prerequisites

The hardware & software requirements needed to use this repo are listed below.

HARDWARE & SOFTWARE

  • RAM: 8+ GB free minimum - more when also running Consul.
  • CPU: 8+ cores free minimum - more when also running Consul.
  • Network interface allowing IP assignment and interconnection in VirtualBox bridged mode for all instances.
    • adjust sNET='en0: Wi-Fi (Wireless)' in the Vagrantfile to match your system (see the sketch after this list).
  • VirtualBox with VirtualBox Guest Additions (VBox GA) correctly installed.
  • Vagrant
  • OPTIONAL: 🔒 A Vault Enterprise license is needed for HSM support. 🔒
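
To find a valid interface name for sNET (the sketch referenced in the list above), VirtualBox can list the host interfaces usable for bridged mode:

VBoxManage list bridgedifs | grep '^Name:' ;  # candidate values for sNET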

Usage & Workflow

Refer to the contents of the Vagrantfile & ensure the network IP ranges suit your setting (see the grep sketch below), then vagrant up.
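
For example, a quick way to surface those settings for review (the grep pattern is an assumption about the Vagrantfile's contents):

grep -n 'sNET\|IP' Vagrantfile ;  # review interface name & IP ranges before vagrant up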

To use Vault Enterprise HSM, ensure that a license file vault_license.txt is present in the directory for each cluster (vault_files_dr-primary/ as well as vault_files_dr-secondary/) and that the template is adjusted with version specifics as documented in the Vagrantfile - eg: VV1='VAULT_VERSION='+'1.10.4+ent.hsm' - prior to performing vagrant up.
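
For instance, copying an existing license file into both cluster directories (file & directory names as stated above) might look like:

# // Place the enterprise license in both cluster directories before provisioning:
cp vault_license.txt vault_files_dr-primary/ ;
cp vault_license.txt vault_files_dr-secondary/ ;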

vagrant up --provider virtualbox ;
# // ... output of provisioning steps.

vagrant global-status ; # should show running nodes
  # id       name                 provider   state   directory
  # -------------------------------------------------------------------------------------
  # 6127f10  dr1primary-haproxy   virtualbox running /home/auser/hashicorp.vagrant_haproxy_vault-dr-clusters
  # c389198  dr1primary-vault1    virtualbox running /home/auser/hashicorp.vagrant_haproxy_vault-dr-clusters
  # 7d3bb3a  dr1primary-vault2    virtualbox running /home/auser/hashicorp.vagrant_haproxy_vault-dr-clusters
  # 893d929  dr1primary-vault3    virtualbox running /home/auser/hashicorp.vagrant_haproxy_vault-dr-clusters
  # 82f6a8b  dr2secondary-haproxy virtualbox running /home/auser/hashicorp.vagrant_haproxy_vault-dr-clusters
  # 200a2a4  dr2secondary-vault1  virtualbox running /home/auser/hashicorp.vagrant_haproxy_vault-dr-clusters
  # 8259c6d  dr2secondary-vault2  virtualbox running /home/auser/hashicorp.vagrant_haproxy_vault-dr-clusters
  # 28261c9  dr2secondary-vault3  virtualbox running /home/auser/hashicorp.vagrant_haproxy_vault-dr-clusters

vagrant ssh dr1primary-vault1
  # ...
#vagrant@dr1primary-vault1:~$ \
vault status
vault read -format=json sys/replication/status ;
vault read -format=json sys/replication/dr/status ;
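
# // A hedged convenience on top of the JSON above (assumes jq, as used later):
# // extract just the DR replication mode to confirm this cluster is the primary.
vault read -format=json sys/replication/status | jq -r '.data.dr.mode' ;  # expect: primary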


# // On a separate Terminal session check status of 2nd Vault cluster.
vagrant ssh dr2secondary-vault1
  # ...
#vagrant@dr2secondary-vault1:~$ \
vault status
VAULT_TOKEN_DR_BATCH=$(cat vault_token_dr_batch.json | jq -r '.auth.client_token') ;
vault operator raft list-peers -dr-token=$VAULT_TOKEN_DR_BATCH ;  # curl -k -X PUT -H "X-Vault-Token: ${VAULT_TOKEN}" -d '{"dr_operation_token":"'$VAULT_TOKEN_DR_BATCH'"}' ${VAULT_ADDR}/v1/sys/storage/raft/configuration ;
# // PROMOTE the dr2 cluster to become the new DR primary:
VAULT_TOKEN_DR_BATCH=$(cat vault_token_dr_batch.json | jq -r '.auth.client_token') ;
vault write /sys/replication/dr/secondary/promote dr_operation_token=${VAULT_TOKEN_DR_BATCH} ;
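
# // Hedged verification sketch: after promotion the former secondary should
# // now report itself as a DR primary.
vault read -format=json sys/replication/dr/status | jq -r '.data.mode' ;  # expect: primary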


exit ;
# // ---------------------------------------------------------------------------
# when completely done:
vagrant destroy -f ;
vagrant box remove -f aphorise/debian12 --provider virtualbox ; # ... delete box images

Notes

This is intended merely as a practice / training exercise.

Reference material: