This is a proof-of-concept project to spin up a scalable web application architecture with Vagrant.
The project involves:
- Vagrant for launching multiple VM instances and persistent storage
- Consul for health checking and service discovery
- Consul Template for automated load balancer management
- nginx for HTTP server load balancing
- HAProxy for MySQL server load balancing
Note: This project was created purely for practice. It is not suitable for production use.
- Vagrant 1.8.1+: http://www.vagrantup.com/
- VirtualBox: https://www.virtualbox.org/
$ git clone https://github.com/chrisleekr/vagrant-scalable-web-application-architecture.git
$ cd vagrant-scalable-web-application-architecture
$ vagrant up
Once the vagrant machines are running, you can access the instances at:
- Consul WEB UI: 192.168.100.11:8500
- Web Load Balancing Machine: 192.168.100.20
- DB Load Balancing Machine: 192.168.100.21
After a vagrant halt/suspend, you need to run the provisioning scripts again to synchronize MySQL from the master to the slave machine:
$ vagrant halt
$ vagrant up && vagrant provision
If you get the error message 'The guest additions on this VM do not match the installed version of VirtualBox!', run the following command before vagrant up:
$ vagrant plugin install vagrant-vbguest
- Launch a scalable web application architecture with a single command
- Used Consul to manage server nodes, service discovery and health checking
- Configured web server load balancing with Consul Template + nginx reverse proxy
- Persistent storage for web servers using Vagrant synced folder
- Configured database server load balancing with Consul Template + HAProxy
- Configured MySQL Two-way Master-Master replication
- Persistent storage for database using Vagrant synced folder
Once all vagrant instances are up, you can access the Consul Web UI by opening http://192.168.100.11:8500 in your browser. You will see services such as consul, web-lb, web, db-lb and db. In the Nodes section, you will see nodes such as consul1, web-lb, web1, db-lb, db1 and so on. If the services and nodes match the following screenshot, everything is up and running successfully.
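If you prefer the command line, the same services can be confirmed through Consul's standard HTTP API and DNS interface (ports 8500 and 8600 are Consul defaults; this is just a quick check, not part of the provisioning):
# List every registered service
$ curl http://192.168.100.11:8500/v1/catalog/services
# Resolve the healthy instances of the web service via Consul DNS
$ dig @192.168.100.11 -p 8600 web.service.consul +short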
Now you can install WordPress to test the architecture. Open your browser and go to http://192.168.100.20; you will see the WordPress installation screen shown below. Set up WordPress with the following information (a quick DNS sanity check for the database host is shown after this list).
Database Name: wordpress
Username: root
Password: root
Database Host: db-lb.service.consul
Table Prefix: wp_
Site Title: [Any title you want, e.g. Test Website]
Username: [Any username you want, e.g. admin]
Password: [Any password you want]
Your Email: [Any email you want]
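If the installer cannot reach the database, a quick sanity check is to confirm from one of the web servers that the db-lb.service.consul name resolves through dnsmasq. This is only an illustration; web1 is assumed to be the Vagrant machine name for web server 1, so adjust it to match the Vagrantfile.
# Should return 192.168.100.21 (the database load balancer)
$ vagrant ssh web1 -c "dig +short db-lb.service.consul"
# If dig is unavailable on the guest, "getent hosts db-lb.service.consul" works as well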
After installing WordPress, you can check that load balancing is working properly. In the browser, go to http://192.168.100.20/server.php. I added a simple PHP script that displays the web server IP and the database hostname. If the Web Server IP and DB Hostname change on refresh, load balancing is configured successfully.
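To watch the rotation without refreshing manually, a simple loop such as the one below (just an illustration; any HTTP client will do) shows the upstream web server changing between requests:
$ for i in 1 2 3 4 5; do curl -s http://192.168.100.20/server.php; echo; done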
As this is a test environment, you can access the databases directly with any MySQL client tool:
db-lb.local
Host: 192.168.100.21
Username: root
Password: root
db1.local
Host: 192.168.100.41
Username: root
Password: root
db2.local
Host: 192.168.100.42
Username: root
Password: root
If you see exactly the same tables in the db1.local and db2.local databases, replication is configured successfully.
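The same check can be run from the command line with the mysql client, using the credentials above (a sketch only; SHOW SLAVE STATUS is standard MySQL, but field names vary slightly between versions):
# The table lists on both masters should be identical
$ mysql -h 192.168.100.41 -u root -proot wordpress -e "SHOW TABLES;"
$ mysql -h 192.168.100.42 -u root -proot wordpress -e "SHOW TABLES;"
# Slave_IO_Running and Slave_SQL_Running should both report Yes on each server
$ mysql -h 192.168.100.41 -u root -proot -e "SHOW SLAVE STATUS\G" | grep -E "Slave_(IO|SQL)_Running"
$ mysql -h 192.168.100.42 -u root -proot -e "SHOW SLAVE STATUS\G" | grep -E "Slave_(IO|SQL)_Running"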
The Vagrant environment contains:
- 3 x Consul servers
- 1 x nginx load balancer for web servers
- 3 x Apache web servers
- 1 x HAProxy load balancer for database servers
- 2 x MySQL master-master replication servers
Note: In order to reduce the launch time, 2 x Consul servers are commented out, since a single Consul server still works well. Consul recommends running at least 3 Consul servers to avoid a single point of failure. In addition, 2 x Apache web servers are commented out so they do not launch initially. If you would like to test the complete architecture, uncomment the VM definitions in the Vagrantfile.
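After uncommenting the definitions, you can check which machines are defined and bring the extra machines up individually. This is a sketch only: the machine names below are assumptions based on the hostnames, so confirm the exact names with vagrant status or in the Vagrantfile.
$ vagrant status
# Machine names are assumed here; replace them with the names reported by vagrant status
$ vagrant up consulserver2 consulserver3 web2 web3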
The following list describes the detailed configuration of each VM:
- Consul servers
- Consul 1 - Bootstrap, Web UI
- Private IP: 192.168.100.11
- Hostname: consulserver1.local
- Web UI access URL: http://192.168.100.11:8500
- Consul 2
- Private IP: 192.168.100.12
- Hostname: consulserver2.local
- Commented out, so it is not launched on the initial checkout
- Consul 3
- Private IP: 192.168.100.13
- Hostname: consulserver3.local
- Commented out, so it is not launched on the initial checkout
- Web server load balancer
- Private IP: 192.168.100.20
- Hostname: web-lb.local
- Web Access URL: http://192.168.100.20:80
- Configured with Consul Template and nginx reverse proxy (a template sketch appears after this configuration list)
- This instance is the access point for internet users.
- Web servers
- Web server 1
- Private IP: 192.168.100.31
- Hostname: web1.local
- Configured with Apache web server
- When the instance is launched, Consul Template on the web server load balancer generates a new nginx config file.
- Web server 2
- Private IP: 192.168.100.32
- Hostname: web2.local
- Same as Web server 1
- Commented out, so it is not launched on the initial checkout
- Web server 3
- Private IP: 192.168.100.33
- Hostname: web3.local
- Same as Web server 1
- Commented out, so it is not launched on the initial checkout
- Database load balancer
- Private IP: 192.168.100.21
- Hostname: db-lb.local
- Database Access: tcp://192.168.100.21:3306
- Configured with Consul Template and HAProxy
- This instance is the access point the web servers use to reach the database.
- Databases
- Database server 1
- Private IP: 192.168.100.41
- Hostname: db1.local
- This instance is configured for Master-Master replication with Database server 2.
- Database Name/Username/Password: wordpress/root/root
- When the instance is launched, Consul Template on the database load balancer generates a new HAProxy config file.
- Database server 2
- Private IP: 192.168.100.42
- Hostname: db2.local
- This instance is configured for Master-Master replication with Database server 1.
- Same as Database server 1
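To illustrate how Consul Template drives the nginx reverse proxy on the web load balancer described above: the template renders every healthy instance of the web service into upstream entries, and consul-template re-renders the file and reloads nginx whenever the service list changes. The sketch below is illustrative only: the file paths and upstream name are assumptions, and flag names differ between consul-template versions, so check this repository's provisioning script for the real template and upstart setup. The HAProxy template on db-lb.local works the same way, rendering the db service into backend server lines.
# Hypothetical template path and upstream name; the real template ships with this repository's provisioning scripts
$ cat > /etc/consul-template/nginx.conf.ctmpl <<'EOF'
upstream web_backend {
{{range service "web"}}  server {{.Address}}:{{.Port}};
{{end}}}
server {
  listen 80;
  location / { proxy_pass http://web_backend; }
}
EOF
# Render the config and reload nginx whenever the "web" service changes in Consul (flag names vary between consul-template versions)
$ consul-template -consul-addr=192.168.100.11:8500 -template "/etc/consul-template/nginx.conf.ctmpl:/etc/nginx/conf.d/default.conf:service nginx reload"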
Note: This section is quite descriptive because I want to document in detail how everything works, both to avoid repeating the same mistakes and for future reference.
- Consul servers will be launched first.
- Consul server 1 (consulserver1.local) will be launched and the provisioning script will be executed.
- Update the package list and upgrade the system (currently commented out; uncomment if needed)
- Set the Server Timezone to Australia/Melbourne
- Enable Ubuntu Firewall and allow SSH & Consul agent
- Add consul user
- Install necessary packages
- Copy an upstart script to /etc/init so the Consul agent will be restarted if we restart the virtual machine
- Get the Consul agent zip file and install it
- Install the Consul Web UI
- Create the Consul configuration directory and consul log file
- Copy the Consul configurations
- Start Consul agent
- Consul server 2 (consulserver2.local) will be launched and the provisioning script will be executed.
- Repeat the aforementioned steps #1-ii to #1-viii
- Create the Consul configuration directory and consul log file
- Copy the Consul configurations
- Start Consul agent
- Consul server 3 (consulserver3.local) will be launched and the provisioning script will be executed.
- Repeat the aforementioned steps #1-ii to #1-viii
- Create the Consul configuration directory and consul log file
- Copy the Consul configurations
- Start Consul agent
- The web load balancer (web-lb.local) will be launched next.
- Repeat the aforementioned steps #1-ii to #1-viii
- Create the Consul configuration directory and consul log file
- Copy the Consul configurations
- Start Consul agent
- Install and configure dnsmasq
- Start dnsmasq
- Create consul-template configuration folder and copy nginx.conf template
- Install nginx
- Download consul-template and copy to /usr/local/bin
- Copy an upstart script to /etc/init, so the Consul template and nginx will be restarted if we restart the virtual machine
- Start consul-template; nginx will then be started via consul-template
- Web servers will be launched next.
- Web server 1 (web1.local) will be launched and the provisioning script will be executed.
- Repeat the aforementioned steps #2-i to #2-vi
- Install apache & php5 packages
- Copy apache site configuration files
- Start apache server
- Download the latest WordPress release and extract it to /var/www
- Web server 2 (web2.local) will be launched and the provisioning script will be executed.
- Repeat the aforementioned steps #3-i to #3-v
- Web server 3 (web3.local) will be launched and the provisioning script will be executed.
- Repeat the aforementioned steps #3-i to #3-v
- The database load balancer (db-lb.local) will be launched next.
- Repeat the aforementioned steps #2-i to #2-vi
- Install MySQL packages - mysql-client
- Install HAProxy
- Create consul-template configuration folder and copy haproxy.conf template
- Download consul-template and copy to /usr/local/bin
- Copy an upstart script to /etc/init so the Consul template and HAProxy will be restarted if we restart the virtual machine
- Start consul-template; HAProxy will then be started via consul-template
- Database servers will be launched next.
- Database server 1 (db1.local) will be launched and the provisioning script will be executed.
- Repeat the aforementioned steps #2-i to #2-iv
- Install MySQL-specific packages and settings - mysql-server, mysql-client
- Set up the MySQL server
- Move the initial database file to the persistent directory
- Set up the MySQL DB and root user
- Set up the root user's host so it is accessible from any remote host
- Create replication user
- Create HAProxy user
- Restart MySQL server
- Install and configure dnsmasq
- Start dnsmasq
- Database server 2 (db2.local) will be launched and the provisioning script will be executed.
- Repeat the aforementioned steps #5-i to #5-iv
- Set up MySQL replication, starting with installing sshpass in order to SSH into MySQL server 1
- Check MySQL server 1 connection
- Dump the wordpress database from MySQL server 1 to /vagrant/data/wordpress.sql
- Import the wordpress database into MySQL server 2 from /vagrant/data/wordpress.sql
- Get the current log file and position on MySQL server 1
- On MySQL server 2, change the master host to MySQL server 1 using the log file and position obtained above
- Get the current log file and position on MySQL server 2
- On MySQL server 1, change the master host to MySQL server 2 using the log file and position obtained above
- Test the replication by creating a table called test_table (a manual sketch of these replication commands follows this list)
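For reference, the replication wiring that the provisioning script automates boils down to statements along the following lines. This is a manual sketch only: the replication user and password and the binary log coordinates are placeholders, so take the real values from the provisioning scripts and from the live SHOW MASTER STATUS output.
# On db1: note the current binary log file and position
$ mysql -h 192.168.100.41 -u root -proot -e "SHOW MASTER STATUS\G"
# On db2: point replication at db1 using the file/position reported above (user and coordinates are placeholders)
$ mysql -h 192.168.100.42 -u root -proot -e "CHANGE MASTER TO MASTER_HOST='192.168.100.41', MASTER_USER='repl', MASTER_PASSWORD='repl', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107; START SLAVE;"
# Repeat the mirror image (SHOW MASTER STATUS on db2, CHANGE MASTER TO ... on db1) to complete the two-way setup,
# then verify both directions with SHOW SLAVE STATUS\G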