GoogleCloudPlatform/continuous-deployment-on-kubernetes

Can't use docker on slaves

philippefuentes opened this issue · 6 comments

I followed the latest update of the tutorial (installed Jenkins using Helm); the only difference is that I use a Git repo to pull my project.

I'd like to start by running Docker on the agents to push images to my GCR registry (as I used to do with the first version of the tutorial).

I use the latest proposed Jenkinsfile with a Docker builder, but I get this error:

+ docker build -f ./source/Dockerfile_PREPROD -t gcr.io/vpc-hosting-150517/titane/tit-client:preprod.40 ./source
time="2019-01-30T09:21:35Z" level=error msg="failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial unix /var/run/docker.sock: connect: no such file or directory"
context canceled

Here is the Jenkinsfile used:

def project = 'titane'
def appName = 'tit-client'
def imageTag = "gcr.io/vpc-hosting-150517/${project}/${appName}:${env.BRANCH_NAME}.${env.BUILD_NUMBER}"

pipeline {
  agent {
    kubernetes {
      label 'titane-client'
      defaultContainer 'jnlp'
      yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: ci
spec:
  # Use service account that can deploy to all namespaces
  serviceAccountName: cd-jenkins
  containers:
  - name: gcloud
    image: gcr.io/cloud-builders/docker
    command:
    - cat
    tty: true
"""
    }
  }
  stages {
    stage('Build and push image with Container Builder') {
      when { branch 'preprod' }
      steps {
        container('gcloud') {
           sh "docker build -f ./source/Dockerfile_PREPROD -t ${imageTag} ./source"
        }
      }
    }
  }
}

Here is the full log:

Started by user admin
 > git rev-parse --is-inside-work-tree # timeout=10
Setting origin to https://github.com/vetup/titane-client
 > git config remote.origin.url https://github.com/vetup/titane-client # timeout=10
Fetching origin...
Fetching upstream changes from origin
 > git --version # timeout=10
 > git config --get remote.origin.url # timeout=10
using GIT_ASKPASS to set credentials github-pfuentes
 > git fetch --tags --progress origin +refs/heads/*:refs/remotes/origin/*
Seen branch in repository origin/full-firebase
Seen branch in repository origin/master
Seen branch in repository origin/preprod
Seen 3 remote branches
Obtained Jenkinsfile-docker from c2be26655d3452e1c9a81d990c91e632000f1044
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Still waiting to schedule task
‘titane-client-p4h9c-hnh7m’ is offline
Agent titane-client-p4h9c-hnh7m is provisioned from template Kubernetes Pod Template
Agent specification [Kubernetes Pod Template] (titane-client): 

Running on titane-client-p4h9c-hnh7m in /home/jenkins/workspace/titane-client_preprod
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Checkout SCM)
[Pipeline] checkout
Cloning the remote Git repository
Cloning with configured refspecs honoured and without tags
Cloning repository https://github.com/vetup/titane-client
 > git init /home/jenkins/workspace/titane-client_preprod # timeout=10
Fetching upstream changes from https://github.com/vetup/titane-client
 > git --version # timeout=10
using GIT_ASKPASS to set credentials github-pfuentes
 > git fetch --no-tags --progress https://github.com/vetup/titane-client +refs/heads/*:refs/remotes/origin/*
Fetching without tags
 > git config remote.origin.url https://github.com/vetup/titane-client # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/vetup/titane-client # timeout=10
Fetching upstream changes from https://github.com/vetup/titane-client
using GIT_ASKPASS to set credentials github-pfuentes
 > git fetch --no-tags --progress https://github.com/vetup/titane-client +refs/heads/*:refs/remotes/origin/*
Checking out Revision c2be26655d3452e1c9a81d990c91e632000f1044 (preprod)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f c2be26655d3452e1c9a81d990c91e632000f1044
Commit message: "test jenkinsfile"
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Build and push image with Container Builder)
[Pipeline] container
[Pipeline] {
[Pipeline] sh
 > git rev-list --no-walk 386434c1c7c9b72bb13b8e5ee09c2f395798bfb3 # timeout=10
+ docker build -f ./source/Dockerfile_PREPROD -t gcr.io/vpc-hosting-150517/titane/tit-client:preprod.40 ./source
time="2019-01-30T09:21:35Z" level=error msg="failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial unix /var/run/docker.sock: connect: no such file or directory"
context canceled
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE

Thank you in advance
Philippe

In the latest tutorial the Docker socket is no longer mounted into the agents. To build images you should use one of the following (a sketch of the Cloud Build approach follows the links):

Cloud Build:
https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes/blob/master/sample-app/Jenkinsfile#L54

or

Kaniko:
https://github.com/jenkinsci/kubernetes-plugin/blob/master/examples/kaniko-declarative.groovy
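For reference, a minimal sketch of the Cloud Build variant (the exact step in the linked Jenkinsfile may differ; it assumes a pod container named 'gcloud' running gcr.io/cloud-builders/gcloud and a service account allowed to submit builds):

container('gcloud') {
  // Cloud Build runs the docker build server-side and pushes to GCR,
  // so no Docker daemon is needed in the agent pod.
  sh "gcloud builds submit --tag ${imageTag} ./source"
}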

I tried using the one below, which works fine.

https://github.com/jenkinsci/kubernetes-plugin/blob/master/examples/docker.groovy

@viglesiasce, is there any reason why one should not use this?

Mounting the Docker socket exposes the underlying node, and possibly the entire cluster, to any build that runs, because Docker generally runs as root these days. This should only be done in situations where you fully trust the CI workload and the people who can alter its contents.
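To make the risk concrete: any build step that can reach the mounted socket can start a container that owns the node. An illustrative step (a sketch only; do not run this in a cluster you care about):

// Illustrative only: with /var/run/docker.sock mounted into the agent,
// a build can mount the node's filesystem and act as root on it.
sh "docker run --rm -v /:/host alpine chroot /host id"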

sneko commented

Hi @viglesiasce ,

I just migrated from GCE to GKE and I'm no longer able to bind the host's Docker into the agent (during docker pull it reports a segmentation fault (core dumped)).

Now that you say it's a pretty bad practice, I'm looking for a way to embed Docker into my build. Any advice? In my case, depending on the stage, I used different Docker images (build/test/...) like this:

        stage('Build') {
            agent {
                docker {
                    image "${builderImage}"
                    reuseNode true
                }
            }
            // build/test steps run inside ${builderImage} here
        }

What could simulate this behavior? Installing a Docker client into my jnlp image? But would that be good practice either?

Thank you,

Hey @sneko!

You would have to mount the Docker socket and the docker binary from the GKE node into the Jenkins agent pod. Here is an example:

https://github.com/jenkinsci/kubernetes-pipeline-plugin/blob/master/kubernetes-steps/readme.md#using-host-path-mounts
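With the declarative kubernetes agent used above, the same idea looks roughly like this (a sketch only, subject to the security caveats above; /var/run/docker.sock is the default socket path on GKE nodes):

agent {
  kubernetes {
    label 'docker-socket'
    yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: docker
    image: docker:stable  # any image that ships a docker CLI
    command:
    - cat
    tty: true
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
"""
  }
}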

sneko commented

Hi @viglesiasce ,

As you mentioned last year in #140 (comment), it's pretty bad practice to mount the Docker daemon's socket from the node. Why do you advise it today? Has anything changed? ^^

On my side, I started using https://github.com/genuinetools/img, which uses the same CLI commands as Docker and doesn't require everything the official Docker CLI needs. I'm really happy with it.

So to sum up, I run an "img" container for the steps that need to build Dockerfiles, as sketched below.
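For example (a sketch; r.j3ss.co/img is the image published by the img project, and depending on the cluster img may still need a relaxed seccomp/apparmor profile):

container('img') {
  // img mirrors the docker build/push CLI but runs daemonless
  sh "img build -t ${imageTag} -f ./source/Dockerfile_PREPROD ./source"
  sh "img push ${imageTag}"
}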

During my research I also saw https://github.com/GoogleContainerTools/kaniko from Google; I read a lot about both and just made a choice.

I hope it will help someone!