PostgreSQL high-availability configuration using pgpool-II with watchdog.
NOTE: This repository is for self-study.
# vagrant up

```shell
vagrant plugin install vagrant-vbguest
vagrant plugin install vagrant-proxyconf # if you need it
vagrant up
vagrant reload # reload to apply the SELinux setting change
```
# install itamae

NOTE: the following command examples omit `bundle exec`.

```shell
bundle install --path vendor/bundle

# Primary node `pg1`: PostgreSQL and pgpool-II
itamae ssh -h pg1 -y node/develop.yml roles/db_master.rb
itamae ssh -h pg1 -y node/develop.yml roles/pgpool.rb

# Standby node `pg2`: PostgreSQL and pgpool-II
itamae ssh -h pg2 -y node/develop.yml roles/db_slave.rb
itamae ssh -h pg2 -y node/develop.yml roles/pgpool.rb
```
Set up passphrase-less SSH so that the `postgres` user can connect from each server to the other.

Note: the host names used for the connection are `backend-pg1` and `backend-pg2`, because streaming replication and `pg_basebackup` use the backend network.

ref: ./cookbooks/postgresql/templates/var/lib/pgsql/9.6/data/recovery_1st_stage.sh.erb#L18
ref: ./cookbooks/pgpool-II/templates/etc/pgpool-II/pgpool.conf.erb#L65

See `common.backend_prefix` and `common.hostnames` in `node/xxx.yml` for the actual host names.
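As an illustration only, the relevant keys in `node/xxx.yml` presumably look something like the fragment below. The key names `common.backend_prefix` and `common.hostnames` come from the text above; the values shown are assumptions, so check the real node file:

```yaml
common:
  backend_prefix: backend-   # assumed: prefix that forms backend-pg1 / backend-pg2
  hostnames:                 # assumed values; see node/xxx.yml in this repository
    - pg1
    - pg2
```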
- Generate a key with `ssh-keygen` on both servers.
- Copy the contents of the public key to `~/.ssh/authorized_keys` on the other server.
- Also copy the contents of the public key to `~/.ssh/authorized_keys` on the server itself: `cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys`
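The steps above can be sketched as follows. This is a minimal sketch, assuming the OpenSSH client tools are installed; it uses a throwaway directory instead of `~postgres/.ssh` so it is safe to run anywhere:

```shell
# Generate a key pair with no passphrase and trust it for self-login.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N "" -f "$keydir/id_rsa"   # key pair, no passphrase
cat "$keydir/id_rsa.pub" >> "$keydir/authorized_keys"    # allow login to self
chmod 600 "$keydir/authorized_keys"
# In the real setup the files live under ~postgres/.ssh/, and the public key is
# also appended to ~/.ssh/authorized_keys on the other server, e.g. from pg1:
#   ssh-copy-id -i ~/.ssh/id_rsa.pub postgres@backend-pg2
```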
- Connect as the `postgres` user:
  - primary node `pg1`:

    ```shell
    su - postgres
    ssh backend-pg1
    ssh backend-pg2
    ```

  - standby node `pg2`:

    ```shell
    su - postgres
    ssh backend-pg1
    ssh backend-pg2
    ```
NOTE: Start up PostgreSQL with `pg_ctl` instead of `systemctl`.

primary node:

```shell
ssh pg1
systemctl start postgresql-9.6.service
systemctl start pgpool.service
```

standby node:

```shell
ssh pg2
systemctl start pgpool.service
```
Then check the node status:

```
$ pcp_watchdog_info -h pool -U pgpool -v
Password:
Watchdog Cluster Information
Total Nodes          : 2
Remote Nodes         : 1
Quorum state         : QUORUM EXIST
Alive Remote Nodes   : 1
VIP up on local node : YES
Master Node Name     : backend-pool1:9999 Linux pg1
Master Host Name     : backend-pool1

Watchdog Node Information
Node Name      : backend-pool1:9999 Linux pg1
Host Name      : backend-pool1
Delegate IP    : 192.168.1.200
Pgpool port    : 9999
Watchdog port  : 9000
Node priority  : 1
Status         : 4
Status Name    : MASTER

Node Name      : backend-pool2:9999 Linux pg2
Host Name      : backend-pool2
Delegate IP    : 192.168.1.200
Pgpool port    : 9999
Watchdog port  : 9000
Node priority  : 1
Status         : 7
Status Name    : STANDBY
```
```
$ pcp_node_info -h pool -U pgpool -v 0
Password:
Hostname   : backend-pg1
Port       : 5432
Status     : 1
Weight     : 0.500000
Status Name: waiting
```

```
$ pcp_node_info -h pool -U pgpool -v 1
Password:
Hostname   : backend-pg2
Port       : 5432
Status     : 3
Weight     : 0.500000
Status Name: down
```
To start streaming replication:

```shell
pcp_recovery_node -h pool -U pgpool -n 1
```
If `pcp_recovery_node` fails and the host OS is Windows, check the line endings of `recovery_1st_stage.sh.erb` (on the host) and `recovery_1st_stage.sh` (on the server):

- on Windows: `cookbooks\postgresql\templates\var\lib\pgsql\9.6\data\recovery_1st_stage.sh.erb`
- on the server: `/var/lib/pgsql/9.6/data/recovery_1st_stage.sh`

Change CRLF to LF, for example:

```shell
sed -i "s/\r//g" /var/lib/pgsql/9.6/data/recovery_1st_stage.sh
```
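The CRLF problem and the fix can be reproduced on a throwaway file (the path below is a temp file, not the real script):

```shell
# Simulate a script saved with Windows line endings, then strip the \r in place.
f=$(mktemp)
printf 'pg_basebackup -h backend-pg1\r\n' > "$f"   # CRLF-damaged line
sed -i "s/\r//g" "$f"                              # same fix as above
```

Note that `sed -i` with no suffix argument is GNU sed behavior; BSD/macOS sed needs `sed -i ''`.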
- `ssh-copy-id` by `postgres` user
  - pg1 => pg2
  - pg2 => pg1
  - pool1 => pg1
  - pool1 => pg2
  - pool2 => pg1
  - pool2 => pg2
- get the database user_name from `xxx.yml`
- is `epel-release` needed? => not needed
- database user
  - auth:
    - repl: is anything else needed?
    - pgpool: is `SELECT` needed?
  - enable login by `postgres` user: `ALTER ROLE`
- auth:
- `recovery_1st_stage.sh`
  - `archivedir`
  - `hostname=$(hostname)`
  - `pg_basebackup`
  - `rm -rf $archivedir/*`
- `recovery.conf`
  - `primary_conninfo`
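The checklist items for `recovery_1st_stage.sh` above can be sketched roughly as follows. This is NOT the repository's actual template (see `cookbooks/postgresql/templates/.../recovery_1st_stage.sh.erb` for that); the argument order, `archivedir` value, and paths are assumptions for illustration only:

```shell
#!/bin/bash
# Hypothetical sketch of a pgpool online-recovery 1st-stage script.
# pgpool invokes it on the primary with (at least) these arguments:
datadir=$1          # primary's data directory
remote_host=$2      # host of the standby being (re)built
remote_datadir=$3   # standby's data directory
archivedir=/var/lib/pgsql/archivedir   # assumed WAL archive location
hostname=$(hostname)                   # primary's own backend-network name

rm -rf "$archivedir"/*   # drop stale WAL archives before a fresh base backup

# Rebuild the standby's data directory over SSH; the passphrase-less postgres
# login set up earlier is what makes this possible. -X stream copies WAL
# alongside the base backup so the standby can catch up.
ssh -T postgres@"$remote_host" "
  rm -rf $remote_datadir
  pg_basebackup -h $hostname -D $remote_datadir -X stream
"
```

A `recovery.conf` whose `primary_conninfo` points back at the primary is then placed on the standby so it starts streaming replication on boot.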