# Simple Node.js API
The application provides a small API consisting of the following endpoints.
| Method | Path | Description |
|---|---|---|
| GET | `/uptime` | Returns the uptime of the application |
| GET | `/server/uptime` | Returns the uptime of the server |
| GET | `/health` | Returns the health status of the application |
The following examples use HTTPie and assume the application is running on localhost at port 3000.
```sh
http :3000/uptime
http :3000/server/uptime
http :3000/health
```
Run the below command and point your browser to http://localhost:3000.
```sh
docker-compose up dev
```
Any changes to the source code should be automatically reflected.
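For reference, the service might look something like the sketch below in docker-compose.yml; the bind mount and a file watcher such as nodemon are what make source changes show up without a rebuild (the actual service definition and entry point path may differ).

```yaml
# Hypothetical sketch of the dev service in docker-compose.yml.
# The bind mount makes host source changes visible inside the container,
# and nodemon restarts the server whenever a file changes.
services:
  dev:
    build: .
    command: npx nodemon src/index.js   # entry point path is illustrative
    ports:
      - "3000:3000"
    volumes:
      - ./src:/app/src
```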
The limited set of tests can be found in the `src/tests` folder.
The tests can be run using the following command.
```sh
docker-compose run test
```
There isn't much to configure, but should you wish to change the server port from the default 3000, set the environment variable `PORT` to whichever port you wish.
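For example, assuming the Compose file passes `PORT` through to the container (an assumption about the setup), the dev server could be started on port 8080 like this:

```sh
# Assumes docker-compose.yml forwards the PORT variable to the container.
PORT=8080 docker-compose up dev
```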
A Helm chart and a Helmfile have been implemented for running this application in production.
Any commit to master will be deployed to the development environment.
Any git-tagged commit will result in a deployment to production.
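For example, cutting a production release could look like this (the tag name is illustrative):

```sh
git tag v1.0.0
git push origin v1.0.0   # the tagged commit triggers a production deploy
```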
The environment variable `env.IMAGE_NAME`, found in `.github/workflows/main.yml`, should be updated to match your Docker registry image name. A GitHub Actions secret named `KUBECONFIG`, containing your raw `kubectl` configuration, should be present for the clusters you wish to target.
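As a sketch of how these two pieces are typically wired up in a GitHub Actions workflow (the step names and registry path are illustrative, not the actual contents of main.yml):

```yaml
# Hypothetical excerpt of .github/workflows/main.yml; values are examples.
env:
  IMAGE_NAME: registry.example.com/my-org/simple-node-api

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Restore kubeconfig from the KUBECONFIG secret
        run: |
          mkdir -p ~/.kube
          echo "${{ secrets.KUBECONFIG }}" > ~/.kube/config
```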
In the following I'll try to give some insight into the thoughts I had during development.
Docker is a tool which offers great power, but it can also easily be configured in ways that open security holes.
The only configurable aspect of this application is the server port, which can be set externally via an environment variable.
In order for our application to receive signals from the host, it should be running as PID 1. It's also best practice to run only one process per container.
Using the command below you can see which PID the process is running under:

```sh
docker-compose exec production ps aux
```
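The usual way to achieve this is the exec form of `CMD`/`ENTRYPOINT`, so Docker starts `node` directly rather than wrapping it in a shell. A sketch (the actual Dockerfile and entry point path may differ):

```dockerfile
# Hypothetical Dockerfile excerpt; the entry point path is illustrative.
# Exec form: node becomes PID 1 and receives SIGTERM/SIGINT directly.
CMD ["node", "src/index.js"]
# The shell form, CMD node src/index.js, would make /bin/sh PID 1 instead,
# and termination signals from the host would not reach the node process.
```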
This should be enforced in the CI/CD pipeline.
It's very important that our process isn't running as root. Should an attacker gain access, it will be a lot easier to escalate further if the process already has administrative privileges.
By running the command below we can see which user is running our process:

```sh
docker-compose exec production whoami
```
Since we are running on Alpine, I chose to use the `guest` user. The important thing to note is that it's a normal user with a limited set of capabilities, and its default shell is `/sbin/nologin`.
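In the Dockerfile this amounts to a single directive before the start command (a sketch; the real file may differ):

```dockerfile
# Hypothetical Dockerfile excerpt: drop root privileges before starting.
# On Alpine, guest is a pre-existing unprivileged user with /sbin/nologin
# as its shell, so no extra user needs to be created.
USER guest
```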
This should be enforced in the CI/CD pipeline.
I want to make it as easy as possible for the developers working on this app. Therefore I've implemented a Docker Compose service called `dev`. The idea is that the developer can be up and running without having to worry about installing any dependencies besides Docker and Docker Compose.
I don't release a Helm package, and I've ignored versioning for the initial release of this app. It would be fairly easy to support different versions running (or being disabled) in different environments.
When dealing with multiple environments it can happen that you accidentally target the wrong one. For that reason it's mandatory to specify an environment when deploying using the Helmfile, and I've hardcoded the Kubernetes context for each environment, thus making it harder to target the wrong environment.
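A sketch of how that can look in a helmfile.yaml (the context names and release details are illustrative, not the actual file):

```yaml
# Hypothetical helmfile.yaml excerpt; context names are examples.
environments:
  development: {}
  production: {}
---
helmDefaults:
  # Tie each environment to a fixed kubectl context, so that e.g.
  # "helmfile -e production apply" can only target the production cluster.
  kubeContext: {{ if eq .Environment.Name "production" }}prod-cluster{{ else }}dev-cluster{{ end }}

releases:
  - name: simple-node-api
    chart: ./chart
```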
The underlying CI/CD system isn't so important; what's important is the processes we implement.
Below is a non-exhaustive list of things which should be considered in a CI/CD pipeline. I won't go into detail since books have been written on each topic.
- Build
- Test
- Integration test
- Smoke test
- Code coverage
- Code analysis
- Scanning of the Docker image
For a real-life application these are a must; a few of them are sketched below.
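A sketch of what some of those stages could look like as a GitHub Actions job (the step layout and the choice of Trivy for image scanning are illustrative):

```yaml
# Hypothetical CI job; tool choices and step names are examples only.
# Assumes env.IMAGE_NAME is defined at the workflow level, as noted above.
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t ${{ env.IMAGE_NAME }} .
      - name: Run tests
        run: docker-compose run test
      - name: Scan image for known vulnerabilities
        run: |
          docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
            aquasec/trivy image ${{ env.IMAGE_NAME }}
```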
My CI/CD implementation currently only supports two environments. I have some ideas about how to expand my current setup to support more, but for now I'll leave it as is.
I don't build a Helm package or upload one to a repository. For a serious setup, a versioned and signed chart should be created and stored in a repository.
It has been an interesting task. I ended up spending a bit less than 7 hours. A fair amount of that was spent trying to fix an issue where the pipeline did what I intended but still failed. Please see my "HACK" note on the last line of the pipeline configuration.
I feel that I implemented a fairly reasonable setup. I test and deploy to dev on every commit.
If a commit is tagged, a Docker image with that tag is built and deployed to production. To avoid faulty deploys, it might be better to only do so for commits on a release branch.