Shio is a deployment tool. The name comes from the Japanese word for the tide and refers to
- http://www.tidepool.org the organization that it was originally developed for
- The fact that deployments happen in waves -- ok, I just made that up
As a deployment tool, shio is opinionated software and is best used if you agree with those opinions. These opinions (a religion, really) are described first in this document; then we cover how to interact with shio. The readme ends with how to set up the system.
Shio attempts to create a safe environment in which developers can work and control their application without stepping on the toes of system administrators. Namely, we believe that:
- Deploys should be repeatable
- `apt-get`, `yum` and other package management tools exist for system administrators, not developers
- Configuration should be merged with code at deploy time
- A developer should have the ability to depend on whatever library they feel is best
- A deployable artifact should be versioned and never change
- Once built, a deployable artifact should never have to be built again
- It is a developer's job to make sure that application dependencies are present
- It is a system administrator's job to provide a stable, consistent operating system with verified, stable base software.
Shio operates by combining "deployable tarballs" with "config tarballs" and running the application, described in order below.
A deployable tarball is a `tar.gz` file with all application dependencies (minus system dependencies) bundled into it. That sentence has two concepts that require a definition:
- system dependency - A system dependency is something that is installed on all machines in your infrastructure. It is assumed to exist as a part of the system rather than a part of the application. The idea is that your operations team will have one or more canonical machine images and these things are already installed on those images. These are most commonly things like java, python, ruby, node, etc. The versions (or lifecycle) of these dependencies are less often tied to any specific application and more often tied to a set of tests that have been run on the hardware to make sure that everything operates as expected.
- application dependency - An application dependency is something that is used by the application itself. These are things like ruby gems, node modules or java jars. The versions (or lifecycle) of these dependencies are intimately tied together with the application code and can regularly change from one version of the application to another.
These two concepts also define the boundary between the system administrator and the application developer. The system administrator is in charge of system level dependencies. The developer is in charge of application dependencies. The developer can choose to use whatever they want as long as they can package that thing into the tarball.
A deployable tarball must have a single directory at its root; that directory will be used as the working directory for the application. The basic rule is that running `tar xzf file.tar.gz` should create its own directory with things in it rather than dump its contents directly into your current directory.
Other than that restriction, the tarball can contain anything. It generally should contain all application dependencies. If you are running a java program, it should have all of your jars. If you are running a node program, it should have been built including the `node_modules` directory. Etc.
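As a sketch (the app name `my_app` and its contents are hypothetical, not part of shio's contract), building a deployable tarball for a node program might look like:

```shell
# Build a hypothetical deployable tarball for a node app.
# The single root directory (my_app) is the contract shio relies on.
mkdir -p my_app/node_modules
echo 'console.log("hello");' > my_app/server.js
tar czf my_app-0.1.0.tar.gz my_app

# Extracting should create my_app/ rather than spill files into the cwd.
tar tzf my_app-0.1.0.tar.gz
```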
A config tarball contains all of the configuration for your application. It has no contract aside from being a gzipped tar file. The config tarball is extracted as is into the single working directory created from extracting the deployable tarball.
That is, if you want a file `config.json` to exist in the `conf` directory for your application deploy, your config tarball should just contain `conf/config.json`.
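For illustration (the file contents here are made up), such a config tarball could be produced with:

```shell
# Create a hypothetical config tarball; note there is no top-level
# directory here -- contents land directly in the app's working directory.
mkdir -p conf
echo '{ "logLevel": "info" }' > conf/config.json
tar czf demo-config.tar.gz conf
tar tzf demo-config.tar.gz
```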
Configuration files can also have values interpolated into them. This is done by including an `_interpolation.files` file in your config bundle. `_interpolation.files` should be a JSON array of the file names that you want the agent to interpolate values into.
Interpolation is done via named parameters in the config files. If you have `#{parameter}` in a config file that is set to be interpolated, the agent will replace `#{parameter}` with its known value of `parameter`.
For example, if you have a `config.json` file that you want the current host added to, you could create:

- `_interpolation.files`:

      ["config.json"]

- `config.json`:

      { "host": "#{host}" }

Assuming that the current host is `192.168.1.10`, this config file would be rewritten as

    { "host": "192.168.1.10" }
Note the quotes in the example `config.json` file. Shio simply does a replacement of the parameter value and doesn't know anything about the syntax of the file. So, if you need a JSON string, you must wrap it in quotes externally.
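Conceptually, the substitution amounts to a plain text replacement, something like the following (this is an illustration, not shio's actual implementation; the file and host value are made up):

```shell
# Illustrative stand-in for the agent's interpolation step.
echo '{ "host": "#{host}" }' > config.json
host="192.168.1.10"          # the agent would supply this value
sed "s/#{host}/${host}/g" config.json
# -> { "host": "192.168.1.10" }
```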
You can extend the options available for interpolation by adding anything you want to the `self` section of the agent's configuration file. By default, the following parameters are available:
- `hostname` -> `os.hostname()`
- `osType` -> `os.type()`
- `platform` -> `os.platform()`
- `arch` -> `os.arch()`
- `cpus` -> `os.cpus().length`
- `mem` -> `os.totalmem()`
- `host` -> `config.self.host`
- `port` -> `config.agent.port`
- `persistentStorage` -> A directory that can be used by the process for storage of things that should persist from one deploy to another.
Shio runs the application by calling the `start.sh` script. So, after extracting the deployable tarball and the config tarball, a `start.sh` script must exist in the working directory.
In general, it is most common (and probably advisable) to store the `start.sh` script in the deployable tarball, but depending on your application's needs it can also be a part of the configuration tarball. `start.sh` should generally use `exec` to run the long-running application process.
Some things to keep in mind when writing a `start.sh` script:

- Have a preamble. Start your script with `#!/bin/bash -eu` or something similar. Without that directive, it is possible that your script will be run by something that doesn't have everything you expect it to have.
- Set up your config. `start.sh` is the only entry point for shio into the tarball. If you need to do something to set up configuration for your process, `start.sh` must do it.
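As a minimal sketch, a build step might generate a `start.sh` like the one below (the node entry point `server.js` is an assumed name, not part of shio's contract):

```shell
# Write a minimal, hypothetical start.sh into the current directory.
cat > start.sh <<'EOF'
#!/bin/bash -eu
export NODE_ENV=production
# exec replaces the shell so signals reach the app process directly
exec node server.js
EOF
chmod +x start.sh
bash -n start.sh   # syntax-check the generated script
```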
These instructions assume someone else has set up shio for you and you are just using it. Even if you are planning on setting up the software, I recommend reading this first just to get a high-level understanding of what shio provides.
We start with the two main concepts of shio: servers and slots. Servers are the physical machines that you deploy to. Slots are deployment targets that exist on servers. You can have multiple different slots on the same server and deploy different things to that one server.
You can see a list of your current servers by running `bin/shio servers show`, which might look something like

```
~/shio$ bin/shio servers show
machine     host           osType  platform  arch  cpus  mem
i-f5c762c1  172.31.20.133  Linux   linux     x64   1     1731977216
i-311daf06  172.31.27.46   Linux   linux     x64   1     1731977216
```
You can also see the set of slots available with

```
~/shio$ bin/shio slots show
No results match your query
```
This shows no slots because we haven't created any yet. Let's create one.
```
~/shio$ bin/shio servers createSlot blah
machine     host           osType  platform  arch  cpus  mem
i-f5c762c1  172.31.20.133  Linux   linux     x64   1     1731977216
i-311daf06  172.31.27.46   Linux   linux     x64   1     1731977216
```
This creates the `blah` slot on the machines and spits out the list of machines that the verb, `createSlot`, applied to.
We can verify that they were created by running
```
~/shio$ bin/shio slots show
machine     slot  host           binary  binaryVersion  config  configVersion  state
i-f5c762c1  blah  172.31.20.133  _empty  _empty         _empty  _empty         STOPPED
i-311daf06  blah  172.31.27.46   _empty  _empty         _empty  _empty         STOPPED
```
This is a great time to introduce an important concept for working with shio: filters! The shio command line operates by applying verbs (like `show`) to a list of slots or servers. The list that the verb gets applied to can be controlled through the use of filters. The list of available filters is a part of the help text, so just run `bin/shio servers -h` or `bin/shio slots -h` to see the list. I'll just discuss and use two in this doc: `-m` and `-s`, the "machine" and "slot" filters.
Let's filter down the servers listing real quick
```
~/shio$ bin/shio servers -m i-f5c762c1 show
machine     host           osType  platform  arch  cpus  mem
i-f5c762c1  172.31.20.133  Linux   linux     x64   1     1731977216
```
It only shows us that one machine now. Now let's create a slot here too.
```
~/shio$ bin/shio servers -m i-f5c762c1 createSlot anotherSlot
machine     host           osType  platform  arch  cpus  mem
i-f5c762c1  172.31.20.133  Linux   linux     x64   1     1731977216
```
We now have 3 total slots!
```
~/shio$ bin/shio slots show
machine     slot         host           binary  binaryVersion  config  configVersion  state
i-f5c762c1  blah         172.31.20.133  _empty  _empty         _empty  _empty         STOPPED
i-f5c762c1  anotherSlot  172.31.20.133  _empty  _empty         _empty  _empty         STOPPED
i-311daf06  blah         172.31.27.46   _empty  _empty         _empty  _empty         STOPPED
```
If we want to operate on just the `blah` slots, we can do
```
~/shio$ bin/shio slots -s blah show
machine     slot  host           binary  binaryVersion  config  configVersion  state
i-f5c762c1  blah  172.31.20.133  _empty  _empty         _empty  _empty         STOPPED
i-311daf06  blah  172.31.27.46   _empty  _empty         _empty  _empty         STOPPED
```
We can delete the slots with the `deleteSlot` verb
```
~/shio$ bin/shio servers deleteSlot anotherSlot
machine     host           osType  platform  arch  cpus  mem
i-f5c762c1  172.31.20.133  Linux   linux     x64   1     1731977216
i-311daf06  172.31.27.46   Linux   linux     x64   1     1731977216
~/shio$ bin/shio slots show
machine     slot  host           binary  binaryVersion  config  configVersion  state
i-f5c762c1  blah  172.31.20.133  _empty  _empty         _empty  _empty         STOPPED
i-311daf06  blah  172.31.27.46   _empty  _empty         _empty  _empty         STOPPED
```
Now, to deploy things, we're going to assume you already have tarballs created and in the right locations. If you don't, just imagine that you did and that these things would magically work for you. You've gotten this far, so you are probably good at the imagination thingie.
```
~/shio$ bin/shio slots assign my_app 0.2.4-1 demo 2013-11-10
machine     slot  host           binary  binaryVersion  config  configVersion  state
i-311daf06  blah  172.31.27.46   my_app  0.2.4-1        demo    2013-11-10     RUNNING
i-f5c762c1  blah  172.31.20.133  my_app  0.2.4-1        demo    2013-11-10     RUNNING
```
Here we assigned a specific binary and a specific config to all slots. Note that there are 4 arguments being passed to the `assign` verb: `<binary> <binaryVersion> <config> <configVersion>`. Shio downloaded the relevant tarballs, splatted them over each other as described above and ran the `start.sh` script.
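The extract-and-overlay sequence can be sketched as follows (tarball names and contents here are made up for illustration; this mimics the steps, not shio's actual code):

```shell
# Build a toy deployable tarball with a single root directory.
mkdir -p build/my_app
printf '#!/bin/bash -eu\ncat conf/config.json\n' > build/my_app/start.sh
chmod +x build/my_app/start.sh
tar -C build -czf my_app-0.2.4-1.tar.gz my_app

# Build a toy config tarball (no top-level directory).
mkdir -p cfg/conf
echo '{ "env": "demo" }' > cfg/conf/config.json
tar -C cfg -czf demo-2013-11-10.tar.gz conf

# 1. Extract the deployable tarball; its root dir is the working directory.
tar xzf my_app-0.2.4-1.tar.gz
# 2. Splat the config tarball over that working directory.
tar -C my_app -xzf demo-2013-11-10.tar.gz
# 3. Run start.sh from the working directory.
( cd my_app && ./start.sh )
# -> { "env": "demo" }
```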
Let's stop just one of them
```
~/shio$ bin/shio slots -m i-f5c762c1 stop
machine     slot  host           binary  binaryVersion  config  configVersion  state
i-f5c762c1  blah  172.31.20.133  my_app  0.2.4-1        demo    2013-11-10     STOPPED
```
And admire our work
```
~/shio$ bin/shio slots show
machine     slot  host           binary  binaryVersion  config  configVersion  state
i-311daf06  blah  172.31.27.46   my_app  0.2.4-1        demo    2013-11-10     RUNNING
i-f5c762c1  blah  172.31.20.133  my_app  0.2.4-1        demo    2013-11-10     STOPPED
```
You can start it up again with the `start` command and unassign a slot with the `unassign` command as well.
Shio requires two kinds of processes to run:
- Coordinator(s)
- Agent(s)
The coordinator maintains state about what machines exist and is the general thing that the client interacts with. It is possible to have multiple of these operating behind a load balancer for availability if that is a concern.
The coordinator can be run with

```
npm run-script coordinator
```
The agents exist on each individual machine and they enact the will of the coordinator. They maintain the local state of what slots exist and what is in those slots.
The agent can be run with

```
npm run-script agent
```
Tarballs are stored in S3 buckets under well-known paths.
You can configure the S3 buckets by setting the parameters XXX
Binary tarballs follow the path structure:
Config tarballs follow the path structure:
Check out `conf/default-config.json` for shio's configuration options and their defaults. You can override any of the defaults by creating a `config.json` file in the root directory of shio and specifying new values for the fields you want to override.
You can add new config interpolation properties by putting them in the `self` portion of the config. Anything added there on the agent processes will be available for interpolation.
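For example (the `environment` property and the port value here are made up), a `config.json` might override the agent port and add a custom interpolation value:

```json
{
  "agent": { "port": 19000 },
  "self": { "environment": "staging" }
}
```

With this in place on an agent, `#{environment}` would be available for interpolation in your config files.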
- Log into the machine and do:

      sudo su -
      service shio stop
      rm /etc/init/shio*
      rm -rf ~tidepool-deploy/shio
      rm -r /mnt/shio
      rm /var/log/upstart/shio*
- Document tarball storage
- Make a verb that allows you to list the binaries and versions available. Same for configs.
- Make a verb that pulls the config down for you to inspect locally
- Persist state about nodes so that servers that disappear can be viewed from the servers view
- Make a verb that allows you to push tarballs through shio instead of directly into s3
- Make the slot creation command actually show you the slots you created