- Node Apps in Cloud Native Docker
- Compose for Awesome Local Development.
- Making production-ready images.
- Running production Node.js containers.
- Docker Desktop (preferred for Windows/Mac)
- Docker Toolbox (Win 7/8/10 Home)
- Linux: Install via Docker Docs
- docs.docker.com
- CLI: docker-compose (a separate binary, written in Python)
- Included in Docker Desktop & Toolbox
- Linux: pip install docker-compose
$ docker version
18.09.1
$ docker-compose version
1.24
- Two parts to talk about: the CLI and the YAML files.
- Designed around developer workflows.
- The docker-compose CLI is a substitute for the docker CLI.
- For local development workflows, docker-compose is a better fit than the raw docker CLI.
- A Docker standard (not yet an industry standard).
- Defines multiple containers, networks, volumes, etc.
- Can layer sets of YAML files, use templates, variables, and more.
- docker-compose.yml is the default filename.
Compose files are written in YAML.
Read more about the Compose file format in the Compose docs.
- Common configuration file format.
- Used by Docker, Kubernetes, Amazon and others.
- ":" is used for key/value pairs.
- Only spaces, no tabs.
- "-" is used for lists.
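As a quick illustration, a compose-style YAML fragment using both constructs (the values here are just examples):

```yaml
image: node:10-slim    # ":" makes a key/value pair
environment:           # a key whose value is a nested map (indented with spaces)
  NODE_ENV: development
ports:                 # a key whose value is a list, one "-" per item
  - "3000:3000"
  - "9229:9229"
```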
- Myth busting: v3 does not replace v2.
- v2 focus: single-node dev/test.
- v3 focus: multi-node orchestration.
- If not using Swarm/Kubernetes, stick to v2.
- Don't have to relearn new skills.
- Reduce typing at command line.
- Many docker commands have a docker-compose equivalent.
- IDEs now support docker-compose.
- "Batteries included, but swappable": there are many defaults.
- The docker-compose CLI and YAML file format have separate version numbers.
Overview of the docker-compose CLI (see the CLI documentation):
- docker-compose up: the "one stop shop"
  - Builds/pulls image(s) if missing.
  - Creates volumes/networks/container(s).
  - Starts container(s) in the foreground (-d to detach).
  - Use --build to always build.
- docker-compose down: cleans up
  - Stops and deletes networks/container(s).
  - Use -v to also delete volumes.
- Many commands take a "service" argument.
- build: just build/rebuild image(s).
- stop: just stop containers, don't delete.
- ps: list "services".
- push: push images to a registry.
- logs: same as the docker CLI.
docker-compose build --no-cache
docker-compose ps
If you are using Docker Toolbox on Windows, you will have to use the IP address of the Docker machine.
Run the command below to get the IP address:
docker-machine ls
If you change the Dockerfile while the image already exists on the machine, you will have to pass --build for docker-compose to build in the new changes:
docker-compose up -d --build
This gets you a shell inside the container, where you can now run things interactively.
- Node.js FROM images.
- CentOS custom image.
- Lock down containers.
- Make images efficient.
- Use COPY instead of ADD. ADD does a lot of extra things (downloading files from the internet, untarring archives).
- npm/yarn install during build (use the defaults with Node containers). Make sure you clean up afterwards.
- CMD node, not npm.
- npm requires another application (node) to run.
- npm is not as literal in Dockerfiles (be super explicit).
- npm doesn't work well as an init or PID 1 process.
- Use WORKDIR, not RUN mkdir.
- Unless you need chown.
- Stick to even-numbered major releases.
- Don't use the :latest tag.
- Start with Debian if migrating.
- Move to Alpine later.
- Don't use :slim.
- Don't use :onbuild.
- Alpine is "small" and "security-focused".
- But Debian/Ubuntu are smaller now too.
- ~100MB of space savings isn't significant.
- Alpine has its own issues.
- Alpine CVE scanning fails.
- Enterprises may require CentOS or Ubuntu/Debian.
- Install Node in the official CentOS image.
- Copy the Dockerfile lines from node:10.
- Use ENV to specify the Node version.
- This will take a few tries.
- Useful for knowing how to build your own Node image, but only do it if you have to.
This is the process of creating a custom image.
A common issue you will encounter is permissions problems.
- Official node images have a built-in "node" user.
- But it's not used by default.
- Switch to it after apt/apk installs and npm i -g.
- Switch to it before npm i.
- It may cause permission issues with write access.
- It may require chown node:node.
- Change user from root to node.
- USER node
- Set permissions on app dir.
- RUN mkdir app && chown -R node:node .
When you run docker-compose exec, you will usually enter the container as the node user. If you ever want to change that, you can use:
docker-compose exec -u root
- The root user has access to everything.
It's great to have your application run as the node user instead of the root user, which is the default.
- To enable the node user in your container, use the USER command.
- Be cautious about the ordering of these lines.
- Commands that need the root user must come above the USER command.
- Create the app's working directory with node:node permissions:
RUN mkdir app && chown -R node:node .
USER node
- Also ensure the copied project files are owned by the node user, for consistent permissions:
- COPY --chown=node:node . .
dockerfile
FROM node:10-slim
EXPOSE 3000
WORKDIR /node
COPY package*.json ./
RUN mkdir app && chown -R node:node .
USER node
RUN npm install && npm cache clean --force
WORKDIR /node/app
COPY --chown=node:node . .
CMD ["node", "app.js"]
docker-compose.yml
version: "2.4"
services:
  hapi:
    build: .
    init: true
    ports:
      - 3000:3000
    volumes:
      - .:/node/app
      - /node/app/node_modules
The focus is on build speed and storage space.
- Use a small base image.
- Line order matters (due to cache busting).
  - Put lines that don't change at the top.
  - Example: EXPOSE.
- Copy twice:
  - COPY package.json package-lock.json ./
  - Copy only package.json and the lockfile first, then run npm install.
  - You want Docker to cache the layer that installs node_modules.
  - You want to avoid re-running npm install when only code changes.
  - Then copy everything else.
- One apt-get per Dockerfile, near the top.
FROM node:10.15-slim
ENV NODE_ENV=production
WORKDIR /node
COPY package.json package-lock*.json ./
RUN npm install && npm cache clean --force
WORKDIR /node/app
COPY . .
CMD ["node", "./bin/www"]
- Lifetime events in containers.
- Correcting Node assumptions.
- Properly replacing node.
- No need for nodemon, forever, or pm2 on servers.
- We'll use nodemon in dev for file watching during development.
- Docker manages app start, stop, restart, and healthchecks.
- Instead of Node clustering for multi-threading, Docker manages multiple "replicas".
- One npm/node problem: they don't listen for proper shutdown signals by default.
- PID 1 (Process Identifier 1) is the first process in a system (or container), AKA init.
- The init process in a container has two jobs:
  - Reap zombie processes.
  - Pass signals to sub-processes.
- Zombies are not a big Node issue.
- Focus on proper Node shutdown.
- Docker uses Linux signals to stop apps (SIGINT/SIGTERM/SIGKILL).
- Avoid SIGKILL (forceful shutdown).
- SIGINT/SIGTERM allow a graceful stop.
- For Node, we need to ensure it finishes in-flight work and cleans up any files it was writing.
- Gracefully shut down open HTTP connections.
- npm doesn't respond to SIGINT/SIGTERM.
- Node doesn't respond by default, but can with code.
- Docker provides an init PID 1 replacement option (tini).
- Temp: use --init to fix ctrl-c for now.
- Workaround: add tini to your image.
- Production: your app captures SIGINT/SIGTERM for a proper exit.
docker run --init -d nodeapp
- Add tini to your Dockerfile, then use it as the ENTRYPOINT (permanent workaround):
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["node", "./bin/www"]
- Use a JS snippet to properly capture signals (production solution):
./sample-graceful-shutdown/sample.js
// place this code in your node app, ideally in index.js or ./bin/www
// you need this code so node will watch for exit signals
// node by default doesn't handle SIGINT/SIGTERM
// docker containers use SIGINT and SIGTERM to properly exit
//
// signals also aren't handled by npm:
// https://github.com/npm/npm/issues/4603
// https://github.com/npm/npm/pull/10868
// https://github.com/RisingStack/kubernetes-graceful-shutdown-example/blob/master/src/index.js
// if you want to use npm then start with `docker run --init` to help, but I still don't think it's
// a graceful shutdown of node process, just a forced exit
//
// quit on ctrl-c when running docker in terminal
process.on("SIGINT", function onSigint() {
  console.info(
    "Got SIGINT (aka ctrl-c in docker). Graceful shutdown ",
    new Date().toISOString()
  );
  shutdown();
});

// quit properly on docker stop
process.on("SIGTERM", function onSigterm() {
  console.info(
    "Got SIGTERM (docker container stop). Graceful shutdown ",
    new Date().toISOString()
  );
  shutdown();
});

// shut down server
function shutdown() {
  // NOTE: server.close is for express based apps
  // If using hapi, use `server.stop`
  server.close(function onServerClosed(err) {
    if (err) {
      console.error(err);
      process.exitCode = 1;
    }
    process.exit();
  });
}
- Make a Dockerfile for an existing Node app.
- Use ./dockerfile/Dockerfile.
- Start with node 10.15 on Alpine.
- Install tini; start node with tini.
- Copy package/lock files first, then npm install, then copy the other code files.
Tini is the simplest init you could think of. All Tini does is spawn a single child (Tini is meant to be run in a container) and wait for it to exit, all the while reaping zombies and performing signal forwarding.
Using Tini has several benefits:
- It protects you from software that accidentally creates zombie processes, which can (over time!) starve your entire system for PIDs.
- It ensures that the default signal handlers work for the software you run in your Docker image. For example, with Tini, SIGTERM properly terminates your process even if you didn't explicitly install a signal handler for it.
- It does so completely transparently! Docker images that work without Tini will work with Tini without any changes.
- Manual install of tini.
Add Tini to your container, and make it executable. Then, just invoke Tini and pass your program and its arguments as arguments to Tini.
FROM node:10.22-alpine
EXPOSE 3000
RUN apk add --no-cache tini
WORKDIR /usr/src/app
COPY package.json package-lock*.json ./
RUN npm install && npm cache clean --force
COPY . .
# Tini is now available at /sbin/tini
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["node", "app.js"]
Note: this is for Alpine. The default node images are usually Debian-based, so make sure you use the right package manager.
- Using Tini injected at runtime:
- If you are using Docker 1.13 or greater, Tini is included in Docker itself.
- Just pass the --init flag to docker run.
- Use ./dockerfile/
- Run with tini built in, try ctrl-c.
  - The container shuts down faster.
- Run with tini built in, try docker stop.
  - The same thing happens; the tini process is still PID 1.
- Remove the ENTRYPOINT, rebuild.
- Run again and try ctrl-c/stop.
  - There is a 10-second delay.
  - Node does not receive the termination signal.
  - Docker waits 10s before a forceful shutdown.
- Add --init to the run command to fix ctrl-c/stop.
Bonus: add signal watch code.
Using the assignment-dockerfile folder:
- Build an image from the Dockerfile:
docker build -t assignment .
- Run the container image with Docker:
docker run -p 8080:3000 assignment
- Run the container in detached mode (this prints the new container's ID):
docker run -d -p 8080:3000 assignment
- Stop the background container using that ID:
docker stop <container-id>
- Check which programs are running inside (Linux utilities):
docker top <container-id>
- Multi-stage builds.
- Docker BuildKit.
- Build a 3-stage image.
- SSH agent in builds.
- New feature in 17.06 (mid-2017).
- Build multiple images from one file.
- Those images can FROM each other.
- COPY files between them.
- Space + security benefits.
Example:
- To build the dev image from the dev stage:
docker build -t myapp .
- To build the prod image from the prod stage:
docker build -t myapp:prod --target prod .
- Add a test stage that runs npm test.
- Have CI build --target test stage before building prod.
- Add npm install --only=development to dev stage
- Create a Dockerfile from ./Multi-stage-dockerfile.
- Create three stages: prod, dev, and test.
- Prod has no devDependencies and runs node.
- Dev includes devDependencies and runs nodemon.
- Test has devDependencies and runs npm test.
- Goal: don't repeat lines.
FROM node:10-slim as prod
ENV NODE_ENV=production
EXPOSE 3000
WORKDIR /app
COPY package*.json ./
RUN npm install --only=production && npm cache clean --force
COPY . .
CMD ["node", "./bin/www"]

FROM prod as dev
ENV NODE_ENV=development
RUN npm install --only=development
CMD ["node_modules/nodemon/bin/nodemon.js", "--inspect=0.0.0.0:9229", "./bin/www"]

FROM dev as test
ENV NODE_ENV=development
RUN npm test
- The above Dockerfile has three stages: prod, dev, and test.
- To build and run the prod image:
docker build -t multistage --target prod . && docker run multistage
- To build and run the development image (using --init for graceful shutdown):
docker build -t multistage:dev --target dev . && docker run --init -p 3000:3000 multistage:dev
- To build and run the test stage:
docker build -t multistage:test --target test . && docker run --init -p 3000:3000 multistage:test
- BuildKit: a new way to build your images, and a replacement "build engine".
- It is an optional feature with quite a few benefits over traditional docker build.
- BuildKit doesn't work with docker-compose, so it can't be used for local development there.
- Break up your local dev workflow into manual steps, such as:
  - Use docker build for the Node.js image.
  - Then use docker-compose for the rest.
- Most image builds will be faster.
- Some rebuilds will be much faster.
- It skips stages in a multi-stage build that aren't needed, which saves considerable time once you have many stages for different uses.
- Mount host paths and secrets during builds so they are never stored in images.
- Mount the host's ssh-agent so builds can use your private keys for private npm modules without copying them into images.
- Mount package manager caches so package downloads are reused between builds (apt, apk, yum, npm, yarn, etc.).
- Future: new features can be added via BuildKit "frontends" without needing a new version of Docker; the frontend version can optionally be pinned in the Dockerfile.
- No Windows container support yet (Linux containers only).
- No docker-compose support yet.
- No UCP (Docker Enterprise) support yet.
- Various registry limitations, including private or insecure registries (fixes in progress).
- Bugs are still being discovered and worked on at moby/buildkit.
- Not enabled by default.
- Some features require enabling experimental mode in the Docker Engine.
- Some features require Dockerfile commands that are not backwards-compatible.
- You can set the environment variable DOCKER_BUILDKIT=1 to enable it for your current shell.
- You can also update the Docker Engine config to enable it permanently once you're ready to go all in on BuildKit.
- Enable in bash/zsh and Toolbox's Quickstart Terminal with: export DOCKER_BUILDKIT=1
- Enable in PowerShell with: $env:DOCKER_BUILDKIT=1
- Optionally, enable permanently in Docker Desktop by updating Preferences/Settings > Daemon > Advanced with the JSON {"features": {"buildkit": true}}
- Enable permanently on a Linux host by updating /etc/docker/daemon.json with {"features": {"buildkit": true}}
If your Node project has private git repos for node modules, it'll need a particular setup so ssh can be used when building the images.
The previous solution, before BuildKit, was:
- Use multi-stage builds.
- COPY a decrypted private key into an early stage where npm install is run.
- COPY the node_modules from that stage into a new image that doesn't include the key.
That solution worked if you're OK with having the ssh key stored in your local Docker Engine's images, but it wasn't ideal, and it didn't work with encrypted ssh keys that require a passphrase.
The new way is to use BuildKit with the ssh-agent feature, which is much more secure:
- Set up ssh-agent and your keys on the host OS as normal.
- Add this as the first line of your Dockerfile:
- # syntax = docker/dockerfile:experimental
- Start your Dockerfile's npm install line with: RUN --mount=type=ssh
- Run docker build with --ssh default as an additional option to enable the feature for that build.
You will need to build images manually, since this is not yet supported by docker-compose.yml.
- Check ./sample-buildkit-ssh
If you ever change a Dockerfile line before the RUN npm install line, or you change your package.json or lock file, Docker will need to re-run npm install on the next build. By default, Docker won't reuse package manager download caches like the npm cache.
If you have a large package.json with slow dependency installs due to large downloads, you can speed up rebuilds by enabling BuildKit's cache mount feature on specific directories inside your builds.
To set this up for reusing the npm download cache:
- Add this as the first line of your Dockerfile:
- # syntax = docker/dockerfile:experimental
- Start your Dockerfile's npm install line with: RUN --mount=type=cache,target=/root/.npm/_cacache
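A sketch of a Dockerfile using that cache mount (the base image and app layout here are illustrative, not from the lecture):

```dockerfile
# syntax = docker/dockerfile:experimental
FROM node:10-slim
WORKDIR /app
COPY package.json package-lock*.json ./
# npm's download cache lands in the mount, which persists between builds
# but is never committed to an image layer, so no "npm cache clean" needed
RUN --mount=type=cache,target=/root/.npm/_cacache npm install
COPY . .
CMD ["node", "./bin/www"]
```

Build it with DOCKER_BUILDKIT=1 set so the experimental frontend is used.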
The ssh-agent Dockerfile could look like this:
# syntax = docker/dockerfile:experimental
FROM alpine
# Install ssh client and git
RUN apk add --no-cache openssh-client git
# Download public key for github.com
RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# Clone private repository
RUN --mount=type=ssh git clone git@github.com:youruser/privaterepo.git
- Follow 12factor.net principles, especially:
- Use environment variables for config.
- Log to stdout/stderr.
- Pin all versions, even npm's.
- Graceful exit on SIGTERM/SIGINT.
- Create a .dockerignore.
- Containers are almost always distributed apps.
- Good news: you get many of these for free by using Docker.
- Let's focus on a few for Node.js.
- Heroku wrote a highly respected guide to creating distributed apps.
- Twelve factors to consider when developing or designing distributed apps.
12factor.net/config
- Store environment config in environment variables (env vars).
- Docker & Compose are great at this, with multiple options.
- Old apps: use a CMD or ENTRYPOINT script with envsubst to pass env vars into config files.
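A minimal sketch of the env-var config pattern in Node (the variable names PORT and DB_HOST and their defaults are illustrative):

```javascript
// config.js - read all runtime config from env vars, with safe defaults,
// so `docker run -e` or a compose "environment:" block can override them
const config = {
  port: parseInt(process.env.PORT || "3000", 10),
  dbHost: process.env.DB_HOST || "localhost",
  nodeEnv: process.env.NODE_ENV || "development",
};

module.exports = config;
```

Then something like docker run -e PORT=8080 changes the port without rebuilding the image.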
- Apps shouldn't route or transport logs to anything but stdout/stderr.
- console.log() works.
- Winston/Bunyan/Morgan: use levels to control verbosity.
- Winston transport: "console"
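If you don't want a logging library at all, the stdout/stderr rule plus level-based verbosity can be sketched in a few lines (the LOG_LEVEL variable and the JSON line shape are my own choices here, not from the course):

```javascript
// tiny leveled logger: JSON lines on stdout/stderr only,
// so `docker logs` and log drivers capture everything
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };
const threshold = LEVELS[process.env.LOG_LEVEL] ?? LEVELS.info;

// build one JSON log line, or null if below the verbosity threshold
function format(level, msg, meta = {}) {
  if (LEVELS[level] > threshold) return null;
  return JSON.stringify({ time: new Date().toISOString(), level, msg, ...meta });
}

function log(level, msg, meta) {
  const line = format(level, msg, meta);
  if (line === null) return;
  // errors go to stderr, everything else to stdout
  (level === "error" ? console.error : console.log)(line);
}

log("info", "server started", { port: 3000 });
```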
- Prevent bloat and unneeded files:
- .git/
- node_modules/
- npm-debug
- docker-compose*.yml
- Not needed, but useful to keep in the image:
- Dockerfile
- README.md
- "Traditional app" = pre-Docker app.
- Take a typical Node app and "migrate" it.
- Using ./mta
- Add a .dockerignore.
- Create a Dockerfile.
- Change the Winston transport to Console.
- See README.md for app details.
- The image shouldn't include the in, out, node_modules, or logs directories.
- Change Winston to Console:
- winston.transports.Console
- Bind-mount the in and out dirs.
- Set CHARCOAL_FACTOR to 0.1.
- Running the container with ./in and ./out bind-mounts results in new charcoal-effect images appearing in ./out on the host.
- Changing --env CHARCOAL_FACTOR changes the effect.
- docker logs shows the Winston output.
FROM node:8
RUN apt-get update && apt-get install -y graphicsmagick
WORKDIR /app
COPY package*.json ./
RUN npm install && npm cache clean --force
COPY . .
# setting environment variables
ENV CHARCOAL_FACTOR=0.1
CMD ["node", "index.js"]
- Build an image from the Dockerfile:
docker build -t mta .
- Run the project using bind-mounts:
docker run -v $(pwd)/in:/app/in -v $(pwd)/out:/app/out --env CHARCOAL_FACTOR=1 mta
- Get a shell inside a running container:
docker exec -it <container> bash
- Check the logs of the running container:
docker logs <container>
- Best local setup
- Use Compose features.
- Tips and Tricks.
- cd ./compose-tips.
- Do use docker-compose for local dev.
- Do use version 2.x for local dev.
- v2-only features: depends_on conditions, hardware-specific settings (CPU/memory).
Read the YAML in that folder.
These are the don'ts:
- Unnecessary: "alias" & "container_name".
- Legacy: "expose" & "links".
  - All containers on the same network are already exposed to each other.
- Don't spell out default settings, e.g. the default bridge network.
- If bind-mounting folders or files from the host, always use relative paths (starting with .). This makes your compose file reusable for others, and it won't break if you move your project around.
  - Note: everybody has a unique absolute path.
- Don't bind-mount databases to the host OS. You'll get bad performance, and many times it won't even work. Best to use named volumes.
- For local dev only? Don't COPY in code.
- Docker Desktop for Windows needs drive permissions.
- Permissions: Linux != Windows.
- Bind-mount performance issues? Look at docker-bg-sync.
- Problem: we shouldn't build images with node_modules from the host.
  - Example: node-gyp (native modules compiled for the host).
- Solution: add node_modules to .dockerignore.
- Let's do this to the sails example.
- Before you run docker build, make sure you have a .dockerignore file.
- docker build -t sailsbret .
- Problem: we can't bind-mount node_modules content from the host on macOS/Windows (different architecture).
- Two potential solutions:
  - Never use npm i on the host; run npm i in Compose instead.
  - Move the modules up a directory in the image, hiding them from the host.
This first solution assumes you will be developing entirely in the container, which is what makes it less flexible.
version: "2.4"
services:
  express:
    build: .
    ports:
      - 3000:3000
    volumes:
      - .:/app
    environment:
      - DEBUG=sample-express:*
- You can't docker-compose up until you've used docker-compose run.
- To install node_modules using docker-compose:
docker-compose run express npm install
- node_modules from the container will be saved to the host.
- Solution 2: more setup, but flexible.
  - You can develop either on the host or in the container, whichever suits you at the time.
- Move node_modules up a directory in the Dockerfile.
  - In the container, Node finds node_modules in the parent directory.
- Use an empty volume to hide node_modules in the bind-mount.
  - This hides the container's node_modules from the host's npm install.
- You can develop on both Windows and Linux without any node_modules side effects.
FROM node:10.15-slim
ENV NODE_ENV=production
WORKDIR /node
COPY package.json package-lock*.json ./
RUN npm install && npm cache clean --force
WORKDIR /node/app
COPY . .
CMD ["node", "./bin/www"]
version: "2.4"
services:
  express:
    build: .
    ports:
      - 3000:3000
    volumes:
      - .:/node/app
      - /node/app/node_modules
    environment:
      - DEBUG=sample-express:*
- Two ways to run tools inside the container:
  - docker-compose run: start a new container and run a command/shell.
  - docker-compose exec: run an additional command/shell in an already-running container.
- Use nodemon in the compose file for file monitoring.
- webpack-dev-server, etc. work the same way.
- Override the Dockerfile CMD via the compose "command:".
- On Windows, enable polling (nodemon --legacy-watch).
- Create a nodemon.json for advanced workflows (bower, webpack, parcel).
dockerfile
FROM node:10.15-slim
ENV NODE_ENV=production
WORKDIR /app
COPY package.json package-lock*.json ./
RUN npm install && npm cache clean --force
ENV PATH /app/node_modules/.bin/:$PATH
COPY . .
CMD ["node", "./bin/www"]
docker-compose.yml
version: "2.4"
services:
  express:
    build: .
    command: /app/node_modules/.bin/nodemon ./bin/www
    ports:
      - 3000:3000
    volumes:
      - .:/app
    environment:
      - DEBUG=sample-express:*
      - NODE_ENV=development
Notes:
- NODE_ENV=development overrides the Dockerfile's environment variable, which is set to production. This ensures everything needed in a dev environment is set up correctly.
- The docker-compose file is also used to override the default command so that nodemon runs.
- Note that the full path to nodemon inside node_modules is used. This is because nodemon is not installed globally; it's listed in package.json so we can control its version.
- Since the bind-mount covers the whole app directory, node_modules is shared between the host and the container.
- Don't npm install on the host; use docker-compose instead:
docker-compose run express npm install nodemon --save-dev
- express is the service name from the docker-compose.yml file.
- Problem: multi-service apps start out of order; Node might exit or cycle.
- Multi-container apps need:
  - Dependency awareness.
  - Name resolution (DNS).
  - Connection failure handling.
- depends_on: when "up X", start Y first.
  - Fixes name resolution issues ("can't resolve <service_name>").
  - Only for Compose, not orchestration.
  - Compose YAML v2: works with healthchecks, like a "wait for" script.
- restart: on-failure
  - Good: helps with slow DB startup while Node.js fails fast. Better: depends_on.
  - Bad: could spike CPU with restart cycling.
- Solution: build connection timeouts, buffering, and retries into your apps.
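The retry idea can be sketched generically (this helper and its defaults are my own illustration; wrap whatever real connect call your driver provides, e.g. () => mongoose.connect(url)):

```javascript
// retry an async connect() with exponential backoff, so the app
// survives a database container that is still starting up
async function connectWithRetry(connect, { retries = 5, baseMs = 100 } = {}) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await connect();
    } catch (err) {
      if (attempt >= retries) throw err; // give up after the last attempt
      const delay = baseMs * 2 ** (attempt - 1); // 100ms, 200ms, 400ms, ...
      console.log(`connect failed (attempt ${attempt}), retrying in ${delay}ms`);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```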
- depends_on: provides only start-order control by default.
- Add v2 healthchecks for a true "wait for".
- Let's see some examples:
  - Mongo
  - Postgres/MySQL
  - web
version: "2.4"
services:
  frontend:
    image: nginx
    depends_on:
      api:
        # this requires a compose file version >= 2.3 and < 3.0
        condition: service_healthy
  api:
    image: node:alpine
    healthcheck:
      test: curl -f http://127.0.0.1
    depends_on:
      postgres:
        condition: service_healthy
      mongo:
        condition: service_healthy
      mysql:
        condition: service_healthy
  postgres:
    image: postgres
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust
    healthcheck:
      test: pg_isready -U postgres -h 127.0.0.1
  mongo:
    image: mongo
    healthcheck:
      test: echo 'db.runCommand("ping").ok' | mongo localhost:27017/test --quiet
  mysql:
    image: mysql
    healthcheck:
      test: mysqladmin ping -h 127.0.0.1
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=true
The above example composes preexisting images in one docker-compose file.
- Each container depends on another container.
- To ensure all the containers spin up in the correct order, there are depends_on entries.
- To ensure all the containers play nicely together, each one has a healthcheck.
- Problem: many HTTP endpoints, many ports.
- Solution: Nginx/HAProxy/Traefik for host-header routing + a wildcard localhost domain.
- Problem: CORS failures in dev.
- Solution: a proxy that adds CORS headers (e.g. Access-Control-Allow-Origin: *).
- Problem: HTTPS locally.
- Solution: create local proxy certificates.
- Problem: multiple endpoints each need a unique DNS name. In order of preference:
  - Use x.localhost, y.localhost in Chrome.
  - Use wildcard domains like *.vcap.me or xip.io.
  - Use dnsmasq on macOS/Linux.
  - Manually edit the hosts file.
- VS Code and other editors have some Docker and Compose features built in.
- Debugging works when we enable it in nodemon and connect remotely via TCP.
- TypeScript compilation and other pre-processors go in nodemon.json.
# a base stage for all future stages
# with only prod dependencies and
# no code yet
FROM node:10-slim as base
ENV NODE_ENV=production
WORKDIR /app
COPY package*.json ./
RUN npm install --only=production \
    && npm cache clean --force
ENV PATH /app/node_modules/.bin:$PATH

# a dev and build-only stage. we don't need to
# copy in code since we bind-mount it
FROM base as dev
ENV NODE_ENV=development
RUN npm install --only=development
# run nodemon. It will use the nodemon.json file for configuration.
CMD ["/app/node_modules/.bin/nodemon"]

FROM dev as build
COPY . .
RUN tsc
# you would also run your tests here

# this only has minimal deps and files
FROM base as prod
COPY --from=build /app/dist/ .
CMD ["node", "app.js"]
.dockerignore:
node_modules/
version: "2.4"
services:
  ts:
    build:
      context: .
      target: dev
    ports:
      - "3000:3000"
      - "9229:9229"
    volumes:
      - .:/app
Note: we expose port 9229 for Node debugging. Since we are using nodemon, the inspect flag is passed via nodemon.json.
{
  "watch": ["src"],
  "ext": "ts",
  "ignore": ["src/**/*.spec.ts"],
  "exec": "node --inspect=0.0.0.0:9229 -r ts-node/register ./src/app.ts"
}
Since we are configuring TypeScript, there is also a tsconfig.json:
{
  "include": [
    "src/**/*" // location of the TypeScript files
  ],
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "sourceMap": true,
    "outDir": "dist",
    "strict": true,
    "noImplicitAny": false,
    "esModuleInterop": true
  }
}
- Take all the learning from this section and apply it to a single compose file!
- Use Docker's example voting app (Dog vs. Cat)
You are the Node.js developer for the "Dog vs. Cat voting app" project. You are given a basic docker-compose.yml and the source code for the "result" Node.js app (a subdirectory of this directory).
Goal: take the docker-compose.yml in this directory, which uses the docker voting example distributed app, and make it more awesome for local development of the "result" app using all the things you learned in this section.
- Set the compose file version to the latest 2.x (done for you)
- Healthcheck for postgres, taken from the depends_on lecture
- Healthcheck for redis, test command is "redis-cli ping"
- vote service depends on redis service
- result service depends on db service
- worker depends on db and redis services
- Remember to add service_healthy conditions to the depends_on objects.
- result is a node app in subdirectory result. Let's bind-mount that
- result should be built from the Dockerfile in ./result/
- Add a traefik proxy service from proxy lecture example. Have it run on a published port of your choosing and direct vote.localhost and result.localhost to their respective services so you can use Chrome
- Add nodemon to the result service based on file watching lecture. You may need to get nodemon into the result image somehow.
- Enable NODE_ENV=development mode for result
- Enable debug and publish debug port for result
- Edit ./result/server.js, save it, and ensure it restarts
- Ensure you never see "Waiting for db" in docker-compose logs, which happens when vote or result are waiting on db or redis to start
- Use VS Code or another editor with debugger (or Chrome) to connect to debugger
- Go to vote.localhost and result.localhost and ensure you can vote and see the result.
version: "2.4"

services:
  traefik:
    image: traefik:1.7-alpine
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:80"
    command:
      - --docker
      - --docker.domain=traefik
      - --docker.watch
      - --api
      - --defaultentrypoints=http,https
    labels:
      - traefik.port=8080
      - traefik.frontend.rule=Host:traefik.localhost
    networks:
      - frontend
      - backend
  redis:
    image: redis:alpine
    networks:
      - frontend
    healthcheck:
      test: redis-cli ping
  db:
    image: postgres:9.6
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend
    healthcheck:
      test: pg_isready -h 127.0.0.1
  vote:
    image: bretfisher/examplevotingapp_vote
    networks:
      - frontend
    depends_on:
      redis:
        condition: service_healthy
    labels:
      - traefik.port=80
      - traefik.frontend.rule=Host:vote.localhost
  result:
    build:
      context: ../result
    command: nodemon index.js
    volumes:
      - ../result:/app
    networks:
      - backend
    depends_on:
      db:
        condition: service_healthy
    labels:
      - traefik.port=80
      - traefik.frontend.rule=Host:result.localhost
  worker:
    image: bretfisher/examplevotingapp_worker:java
    networks:
      - frontend
      - backend
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy

networks:
  frontend:
  backend:

volumes:
  db-data:
- Avoiding devDependencies in prod.
- Multi-stage can solve this:
  - prod stage: npm i --only=production
  - dev stage: npm i --only=development
- Use npm ci to speed up builds.
- Ensure NODE_ENV is set.
Dockerfile.
## Stage 1 (production base)
# This gets our prod dependencies installed and out of the way
FROM node:10-alpine as base
EXPOSE 3000
ENV NODE_ENV=production
WORKDIR /opt
COPY package*.json ./
# we use npm ci here so only the package-lock.json file is used
RUN npm ci \
    && npm cache clean --force
## Stage 2 (development)
# we don't COPY in this stage because for dev you'll bind-mount anyway
# this saves time when building locally for dev via docker-compose
FROM base as dev
ENV NODE_ENV=development
ENV PATH=/opt/node_modules/.bin:$PATH
WORKDIR /opt
RUN npm install --only=development
WORKDIR /opt/app
CMD ["nodemon", "--inspect=0.0.0.0:9229", "./bin/www"]
## Stage 3 (copy in source for prod)
# This gets our source code into builder
# this stage starts from the first one and skips dev
FROM base as prod
WORKDIR /opt/app
COPY . .
CMD ["node", "./bin/www"]
docker-compose.yml
version: "2.4"
services:
  web:
    init: true
    build:
      context: .
      target: dev
    ports:
      - "3000:3000"
    volumes:
      - .:/opt/app:delegated
      - /opt/app/node_modules
- Document every line that isn't obvious.
- FROM stages: document why each is needed.
- COPY = don't document.
- RUN = maybe document.
- Add LABELS
- RUN npm config list
- LABEL has OCI standards now.
- LABEL org.opencontainers.image.
- Use ARG to add info to Labels like build date or git commit.
- Docker Hub has built-in envvars for use with ARGs.
FROM node:10
# set this with shell variables at build-time.
# If they aren't set, then not-set will be default.
ARG CREATED_DATE=not-set
ARG SOURCE_COMMIT=not-set
# labels from https://github.com/opencontainers/image-spec/blob/master/annotations.md
LABEL org.opencontainers.image.authors=edwinkamau
LABEL org.opencontainers.image.created=$CREATED_DATE
LABEL org.opencontainers.image.revision=$SOURCE_COMMIT
LABEL org.opencontainers.image.title="Sample Node.js Dockerfile with LABELS"
LABEL org.opencontainers.image.url=https://hub.docker.com/r/bretfisher/jekyll
LABEL org.opencontainers.image.source=https://github.com/BretFisher/udemy-docker-mastery-for-nodejs
LABEL org.opencontainers.image.licenses=MIT
LABEL com.edwin.nodeversion=$NODE_VERSION
WORKDIR /app
COPY index.js .
CMD ["node", "index.js"]
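The ARGs above are supplied at build time; a sketch of passing them in and verifying the result (the `labeldemo` tag is just an example name, not from the course):

```shell
# pass build-time metadata into the LABELs above
docker build -t labeldemo \
  --build-arg CREATED_DATE=$(date -u +"%Y-%m-%dT%H:%M:%SZ") \
  --build-arg SOURCE_COMMIT=$(git rev-parse --short HEAD) .

# verify the labels landed on the image
docker image inspect labeldemo --format '{{ json .Config.Labels }}'
```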
- YAML (unlike JSON) supports comments!
- Document objects that aren't obvious.
- Why a volume is needed.
- Why a custom CMD is needed.
- Template blocks at top.
- Override objects and files.
- RUN npm test in a specific build stage in a multistage dockerfile.
- Also good for linting commands.
- Only run unit tests in build.
- Test stage not default.
- Locally, run docker-compose run node npm test
## Stage 1 (production base)
# This gets our prod dependencies installed and out of the way
FROM node:10-alpine as base
EXPOSE 3000
ENV NODE_ENV=production
WORKDIR /opt
COPY package*.json ./
# we use npm ci here so only the package-lock.json file is used
RUN npm config list \
&& npm ci \
&& npm cache clean --force
## Stage 2 (development)
# we don't COPY in this stage because for dev you'll bind-mount anyway
# this saves time when building locally for dev via docker-compose
FROM base as dev
ENV NODE_ENV=development
ENV PATH=/opt/node_modules/.bin:$PATH
WORKDIR /opt
RUN npm install --only=development
WORKDIR /opt/app
CMD ["nodemon", "./bin/www", "--inspect=0.0.0.0:9229"]
## Stage 3 (copy in source)
# This gets our source code into builder for use in next two stages
# It gets its own stage so we don't have to copy twice
# this stage starts from the first one and skips the last two
FROM base as source
WORKDIR /opt/app
COPY . .
## Stage 4 (testing)
# use this in automated CI
# it has prod and dev npm dependencies
# In 18.09 or older builder, this will always run
# In BuildKit, this will be skipped by default
FROM source as test
ENV NODE_ENV=development
ENV PATH=/opt/node_modules/.bin:$PATH
# this copies all dependencies (prod+dev)
COPY --from=dev /opt/node_modules /opt/node_modules
# run linters as part of build
# be sure they are installed with devDependencies
RUN eslint .
# run unit tests as part of build
RUN npm test
# run integration testing with docker-compose later
CMD ["npm", "run", "int-test"]
## Stage 5 (default, production)
# this will run by default if you don't include a target
# it has prod-only dependencies
# In BuildKit, this is skipped for dev and test stages
FROM source as prod
CMD ["node", "./bin/www"]
- To build the test image, run a build that targets the test stage: docker build -t test --target test .
- Use a test stage in multi-stage, or a new image.
- Or run it in CI once the image is built.
- Only report at first, don't fail builds (most images have at least one CVE vuln).
- Consider RUN npm audit
## Stage 1 (production base)
# This gets our prod dependencies installed and out of the way
FROM node:10-alpine as base
EXPOSE 3000
ENV NODE_ENV=production
WORKDIR /opt
COPY package*.json ./
# we use npm ci here so only the package-lock.json file is used
RUN npm config list \
&& npm ci \
&& npm cache clean --force
## Stage 2 (development)
# we don't COPY in this stage because for dev you'll bind-mount anyway
# this saves time when building locally for dev via docker-compose
FROM base as dev
ENV NODE_ENV=development
ENV PATH=/opt/node_modules/.bin:$PATH
WORKDIR /opt
RUN npm install --only=development
WORKDIR /opt/app
CMD ["nodemon", "./bin/www", "--inspect=0.0.0.0:9229"]
## Stage 3 (copy in source)
# This gets our source code into builder for use in next two stages
# It gets its own stage so we don't have to copy twice
# this stage starts from the first one and skips the last two
FROM base as source
WORKDIR /opt/app
COPY . .
## Stage 4 (testing)
# use this in automated CI
# it has prod and dev npm dependencies
# In 18.09 or older builder, this will always run
# In BuildKit, this will be skipped by default
FROM source as test
ENV NODE_ENV=development
ENV PATH=/opt/node_modules/.bin:$PATH
# this copies all dependencies (prod+dev)
COPY --from=dev /opt/node_modules /opt/node_modules
# run linters as part of build
# be sure they are installed with devDependencies
RUN eslint .
# run unit tests as part of build
RUN npm test
# run integration testing with docker-compose later
CMD ["npm", "run", "int-test"]
## Stage 5 (security scanning and audit)
FROM test as audit
RUN npm audit
# aqua microscanner, which needs a token for API access
# note this isn't super secret, so we'll use an ARG here
# https://github.com/aquasecurity/microscanner
ARG MICROSCANNER_TOKEN
ADD https://get.aquasec.com/microscanner /
RUN chmod +x /microscanner
RUN apk add --no-cache ca-certificates && update-ca-certificates
RUN /microscanner $MICROSCANNER_TOKEN --continue-on-failure
## Stage 6 (default, production)
# this will run by default if you don't include a target
# it has prod-only dependencies
# In BuildKit, this is skipped for dev and test stages
FROM source as prod
CMD ["node", "./bin/www"]
- To run the audit stage, target it at build time. We use ARG when we want to pass variables in from outside the Dockerfile.
docker build -t auditnode --target audit --build-arg MICROSCANNER_TOKEN=$MICROSCANNER .
Travis CI, Jenkins, Azure DevOps.
- Have CI build images on (some) branches.
- Push to registry once build/test pass.
- Lint Dockerfiles and Compose/Stack Files.
- Use docker-compose run or --exit-code-from for proper exit codes.
- Docker hub can do this.
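A minimal sketch of this flow as a Travis CI config (file names, the `sut` service name, and image tags are assumptions, not from the course):

```yaml
# hypothetical .travis.yml: build the test stage, run compose tests,
# and push only when everything passes on master
language: minimal
services:
  - docker
script:
  - docker build -t myapp:ci --target test .
  - docker-compose -f docker-compose.test.yml up --exit-code-from sut
after_success:
  - test "$TRAVIS_BRANCH" = "master" && docker push myapp:ci
```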
- <name>:latest is only a convention.
- Use latest for easy local access to the current release.
- Maybe do this per major branch too for convenience.
- Don't repeat tags on CI or servers.
- Always include HEALTHCHECK
- docker run and docker-compose: info only.
- Docker Swarm: key for uptime and rolling updates.
- Kubernetes: not used, but helps in making readiness/liveness probes.
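A minimal HEALTHCHECK sketch (the /healthz path and port are assumptions; use whatever endpoint your app exposes, and note curl must exist in the image):

```Dockerfile
# check the app's own HTTP endpoint; assumes curl is installed in the image
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
  CMD curl -f http://localhost:3000/healthz || exit 1
```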
- Use an existing Node.js sample app.
- Make a production-grade Dockerfile.
- Development friendly, testing stage, security scanning, non-root user, labels, minimal prod size.
- It's better to split the RUN command into multiple steps.
- It's easier for debugging.
You are the Node.js developer for the "Dog vs. Cat voting app" project. You are given a basic Dockerfile and the source code for the "result" Node.js app.
Goal: take the Dockerfile in this directory and make it the ULTIMATE for a combination of local development, production, and testing of the "result" app using all the things you learned in this section.
- Create a multi-stage Dockerfile that supports specific images for production, testing, and development.
- devDependencies should not exist in production image.
- Use npm ci to install production dependencies.
- Use Scenario 1 for setting up node_modules (the simple version).
- Set NODE_ENV properly for dev and prod.
- The dev stage should run nodemon from devDependencies, either by updating the $PATH or hard-coding the path to nodemon.
- Edit docker-compose.yml to target the dev stage.
- Add LABELS from OCI standard (values are up to you) to all images.
- Add npm config list output before running npm install.
- Create a test stage that runs npm audit.
- The ./tests directory should only exist in the test image.
- This is a tricky one to configure.
- This means that you might need an image that does clean-up.
- Healthchecks should be added for production image.
- Prevent repeating costly commands like npm installs or apt-get.
- Only COPY . . source code once, then COPY --from to get it into other stages.
- Add a security scanner to test stage and test it.
- Try using a Dockerfile ARG to add token to microscanner.
- Add Best Practices from earlier section, including:
- Enable BuildKit and try a build.
- Add tini to images so containers will receive shutdown signals.
- Enable the non-root node user for all dev/prod images.
- You might need root user for test or scanning images depending on what you're doing (test and find out!)
- Build all stages as their own tag. ultimatenode:test should be bigger than ultimatenode:prod
- All builds should finish.
- Run dev/test/prod images, and ensure they start as expected.
- docker-compose up should work and you can vote at http://localhost:5000 and see results at http://localhost:5001.
- Ensure the prod image doesn't have unnecessary files by running docker run -it <imagename>:prod bash and checking it:
  - ls the contents of /app/node_modules/.bin; it should not contain nodemon or devDependencies.
  - ls the contents of /app in the prod image; it should not contain the ./tests directory.
- After docker-compose up, run docker-compose exec result ./tests/tests.sh to perform a functional test across containers. After a moment's delay, it should pass.
Good Luck!
- Pre-prod stage (clean up stage.)
- RUN rm -rf ./tests && rm -rf ./node_modules
- This line removes the ./tests directory and the ./node_modules.
- ./node_modules must be removed since we don't want dev-modules to copy in.
- The app will be merged with node_modules installed in the base stage.
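A hedged sketch of how such a clean-up stage could fit into the multi-stage Dockerfile (stage names follow the earlier examples; the real solution may differ):

```Dockerfile
## pre-prod clean-up stage: strip tests and dev node_modules
FROM test as pre-prod
WORKDIR /opt/app
RUN rm -rf ./tests && rm -rf ./node_modules

## final prod stage: copy the cleaned app on top of the prod-only base
FROM base as prod
WORKDIR /opt/app
COPY --from=pre-prod /opt/app .
CMD ["node", "./bin/www"]
```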
- Build the development images.
docker build -t ultimatenode:dev --target dev .
- Build the production image.
docker build -t ultimatenode:prod --target prod .
- Start development using docker-compose.
docker-compose up
- Start docker-compose in the background.
docker-compose up -d
- Enable BuildKit.
DOCKER_BUILDKIT=1 docker build -t ultimatenode:dev --target dev .
- Add tini to the Images so containers will receive shutdown signals.
- Refer tini github copy the line.
# Add Tini
ENV TINI_VERSION v0.19.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
- After tini is added to the base layer, that busts the cache, so we will have to rebuild.
- Run the production image with tini.
- docker run --init ultimatenode:prod
- Enable the non-root node user for all dev/prod images.
- Non-root users don't usually have permission to bind low ports.
- Low ports are reserved for the root user and its applications.
- Use higher ports; change port 80 to 8080.
- Set the PORT env variable to 8080; Node.js will pick it up.
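A minimal sketch of the non-root + high-port setup (paths and the app entrypoint follow the earlier examples):

```Dockerfile
FROM node:10-alpine
# high port so the non-root user can bind it; the app reads the PORT env var
ENV PORT=8080
EXPOSE 8080
WORKDIR /app
# chown so the node user can read/write app files
COPY --chown=node:node . .
# built-in unprivileged user in the official node images
USER node
CMD ["node", "./bin/www"]
```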
- Running in Production.
- HTTP Proxies.
- When to Orchestrate.
- Replacing Running Apps.
- Swarm/Kubernetes.
- Node is usually single threaded.
- Use multiple replicas, not PM2/forever.
- Start with 1-2 replicas per CPU.
- Unit testing = single replica. Integration testing = multiple replicas.
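Replicas instead of PM2 can be sketched in a v2.x compose file (service and image names are placeholders):

```yaml
version: "2.4"
services:
  web:
    image: myapp
    # run two copies locally instead of PM2/forever clustering
    scale: 2
```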
- Only understands a single server (engine).
- Doesn't understand uptime or healthchecks.
- Swarm is easy and solves most use cases.
- Single server? use swarm
- Kubernetes not ideal for 1-5 servers. Try cloud hosted.
- amazon , digital ocean, azure.
- Common: many HTTP/S containers need to listen on 80/443.
- Nginx and HAProxy have lots of options.
- Traefik is the new kid, full of cool features.
- Add SIGTERM code to all Node.js apps.
- Prevents killing the app, but not graceful connection migration.
- Check godaddy/terminus for easier healthchecks + shutdown.
- Shutdown wait defaults: Docker/Swarm: 10s, Kubernetes: 30s.
- Kubernetes/Swarm use healthchecks differently for ingress LB.
- Make shutdown waits longer than HTTP long polling.
- HTTP: use stoppable to track open connections.
- Multi-container, single image.
- Startup "ready" state: healthchecks.
- Multi-container client state sharing (don't use in-memory state)
- Shutdown cleanup: re-connect clients, close DB, fail readiness (K8s)
- Kubernetes and Swarm-ready version.
- Healthcheck/readiness waits for DB.
- Readiness re-checks DB connection.
- Socket.IO uses Redis.
- Stoppable for cleanup.
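The readiness idea above can be sketched in plain Node.js (the healthz handler and the checkDb stub are hypothetical; a real app would ping its actual DB, e.g. pg's `client.query('SELECT 1')`):

```javascript
// readiness flag: flipped on after startup work, off during shutdown
let ready = false;

// stand-in for a real DB ping (stubbed to always succeed here)
function checkDb() {
  return true;
}

// handler for a readiness endpoint; orchestrators route traffic away
// from replicas that return non-200 here
function healthz(req, res) {
  const ok = ready && checkDb();
  res.statusCode = ok ? 200 : 503;
  res.end(ok ? 'ok' : 'not ready');
}

function markReady() { ready = true; }      // call once DB connect succeeds
function markNotReady() { ready = false; }  // call on SIGTERM before closing
```

Failing readiness during shutdown is what lets Kubernetes drain traffic before the container exits.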
- Example of Node.js app stack.
- Has cluster features under "deploy"
- replicas, update_config
- stop_grace_period.
version: '3.7'
# for more swarm examples see
# https://github.com/BretFisher/dogvscat
#
# NOTE: requires port 80, 443, and 8080 open on Docker host
# Use chrome with hostnames like http://result.localhost
x-default-opts:
&default-opts
logging:
options:
max-size: "1m"
services:
traefik:
<<: *default-opts
image: traefik:1.7-alpine
networks:
- traefik-proxy
volumes:
- /var/run/docker.sock:/var/run/docker.sock
ports:
- target: 80
published: 80
protocol: tcp
mode: host
- target: 443
published: 443
protocol: tcp
mode: host
- target: 8080
published: 8080
protocol: tcp
mode: ingress # traefik dashboard
command:
- --docker
- --docker.swarmMode
- --docker.domain=traefik
- --docker.network=traefik-proxy
- --docker.watch
- --api
- --defaultentrypoints=http,https
deploy:
mode: global
placement:
constraints: [node.role == manager]
labels:
- traefik.port=8080
- traefik.frontend.rule=Host:traefik.localhost
vote:
<<: *default-opts
image: bretfisher/examplevotingapp_vote
networks:
- frontend
- traefik-proxy
deploy:
replicas: 2
labels:
- traefik.port=80
- traefik.frontend.rule=Host:vote.localhost
result:
<<: *default-opts
image: bretfisher/examplevotingapp_result:stoppable
networks:
- backend
- traefik-proxy
stop_grace_period: 5m
deploy:
replicas: 2
labels: [APP=VOTING]
update_config:
parallelism: 1
failure_action: rollback
order: start-first
labels:
- traefik.port=80
- traefik.frontend.rule=Host:result.localhost
- traefik.backend.loadbalancer.stickiness=true
worker:
<<: *default-opts
image: bretfisher/examplevotingapp_worker:java
networks:
- frontend
- backend
deploy:
mode: replicated
replicas: 2
restart_policy:
condition: on-failure
delay: 10s
max_attempts: 3
window: 120s
redis:
<<: *default-opts
image: redis:alpine
command: redis-server --appendonly yes
volumes:
- redis-data:/data
networks:
- frontend
deploy:
endpoint_mode: dnsrr
db:
<<: *default-opts
image: postgres:9.4
volumes:
- db-data:/var/lib/postgresql/data
environment:
- POSTGRES_HOST_AUTH_METHOD=trust
networks:
- backend
deploy:
endpoint_mode: dnsrr
networks:
frontend:
backend:
traefik-proxy:
name: traefik-proxy
volumes:
db-data:
redis-data:
- ARM processors are used everywhere: embedded devices.
- Amazon has ARM servers.
- But it's hard to develop on ARM.
- Develop on a Raspberry Pi.
- April 2019: Docker + ARM partnership.
- How NASA used Docker for rocket testing.
- Docker Desktop runs ARM now!
- Node is great on ARM.
- Docker is the easiest way to develop and deploy ARM solutions.
- Easy button: change the FROM images to arm64v8/node
- This forces macOS/Win to run ARM.
- Use QEMU "proc emulator".
- Build/run like normal.
- Mix with x86 in compose.
- Visit the Docker Hub node page.
- Scroll down to supported architectures.
- Click on arm64v8/node:10-alpine.
- Takes you to the ARM-based node version.
FROM arm64v8/node:10-alpine
EXPOSE 3000
WORKDIR /usr/src/app
COPY package.json package-lock.json* ./
RUN npm install && npm cache clean --force
COPY . .
CMD [ "node", "./bin/www" ]
- To check the architecture of an image use the command.
- docker image inspect arm64v8/node:10-alpine | grep Arch
- Build the image with:
- docker build -t arm64node .
- AWS A1 Instances (Graviton Processors.)
- Testing my IoT/embedded code.
- Docker Hub doesn't build arm64 images.
- Or does it? (QEMU hack)
- Build your own CI with QEMU.
- Swarm just works!
- ARM + Docker partnership will make this easier.
- Build multi-arch in one command.
- Store multi-arch images in single repo.
- Easier to know which arch you're running locally.
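Today, docker buildx can already do the one-command multi-arch build (the image name is a placeholder; requires Docker 19.03+ with buildx enabled):

```shell
# create a builder that can target multiple platforms, then
# build and push amd64 + arm64 variants under one tag
docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 \
  -t yourname/yourapp:latest --push .
```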