horusec-cli from within Kubernetes DinD container consistently reports no issues
ccureau opened this issue · 8 comments
What happened:
A Kubernetes pod with a simple Docker-in-Docker setup fails to report vulnerabilities from either a docker-based scan or the horusec-cli tool. This happens regardless of whether the -D flag is set to true or false.
What you expected to happen:
When I run this on a Mac with Docker Desktop installed, the scan reports vulnerabilities.
How to reproduce it (as minimally and precisely as possible):
- Create a deployment with a docker and a docker-socket container:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: horusec
  name: horusec
spec:
  containers:
  - image: docker:20.10.11
    name: docker
    command:
    - sleep
    args:
    - '99d'
    resources: {}
    volumeMounts:
    - name: varrun
      mountPath: /var/run
  - image: docker:20.10.11-dind
    securityContext:
      privileged: true
    name: docker-daemon
    resources: {}
    volumeMounts:
    - name: varrun
      mountPath: /var/run
  volumes:
  - name: varrun
    emptyDir: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
- In the docker container, execute a docker scan against some code:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v ${PWD}:/workspace horuszup/horusec-cli:v2.6 horusec start '-p=/workspace/' '-P=${PWD}' '-o=sonarqube' '-O=/workspace/horusec.json'
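For reference, a hedged breakdown of the flags used above; the descriptions reflect my reading of horusec start --help, so double-check them against the CLI version you run:

# -p  path of the project inside the horusec-cli container
# -P  path that the spawned analysis-tool containers bind-mount; the Docker
#     daemon must be able to resolve it as an absolute path on its own side
# -o  report output format (sonarqube here)
# -O  file the report is written to
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v ${PWD}:/workspace \
  horuszup/horusec-cli:v2.6 horusec start '-p=/workspace/' '-P=${PWD}' '-o=sonarqube' '-O=/workspace/horusec.json'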
Anything else we need to know?:
Regardless of whether I run a docker-based scan or a scan from the horusec-cli tool (both with and without -D="true"), the scan always reports no vulnerabilities found.
Environment: Kubernetes (1.18+)
- Horusec version (use horusec version): v2.6.7
- Operating System: Linux
- Network plugin / Tool and version (if this is a network-related / tool bug):
- Others:
Hi @ccureau, thank you for the bug report.
We are going to check what happened and get back to you as soon as possible.
I did some quick tests to see what is happening; it seems to be a problem with invalid mount paths inside the dind container.
Using the standalone binary with the -D flag, I managed to run without errors and found some vulnerabilities. With this flag, none of the tools that Horusec orchestrates will run, which makes the analysis less thorough. Here you can check more information about the Horusec engine.
Until a fix is released, downloading the binary and running it without Docker (horusec start -D) will work.
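A minimal sketch of that workaround, assuming the horusec binary has already been downloaded into the scanning container and placed on the PATH:

cd /workspace              # or wherever the code to scan lives
horusec start -p . -D      # -D skips the docker-based tools, so only the built-in Horusec engine runs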
To give more context on this problem:
- When running in a dind container inside a k8s cluster as described above, we get this error:
time="2022-02-07T18:35:05Z" level=error msg=" Error while running tool Trivy: trivy config cmd: Error response from daemon: invalid mount config for type \"bind\": invalid mount path: '${PWD}/.horusec/9736a216-203e-4ced-8360-3063a2d9c9f3' mount path must be absolute"
- I believe this must be related to the way we mount the volumes on our containers in this line.
I tried changing the propagation settings to see whether the error persisted, and unfortunately it did, so I believe the problem is in our mount.Type param, but changing it will not be easy and will probably require a breaking change or new flags for cases like this.
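To make the failure concrete, here is an illustrative reproduction of the daemon error outside Horusec (note that single quotes keep ${PWD} from being expanded by the shell, which is one way the literal string can end up in the mount source):

# fails: the bind-mount source is the literal, non-absolute string '${PWD}/.horusec'
docker run --rm --mount 'type=bind,source=${PWD}/.horusec,target=/src' alpine ls /src
# -> invalid mount config for type "bind": invalid mount path: '${PWD}/.horusec' mount path must be absolute

# works: the source expands to an absolute path; in a dind setup the path must
# also exist on the daemon's side of the shared filesystem
docker run --rm --mount "type=bind,source=$(pwd),target=/src" alpine ls /src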
What do you think, @matheusalcantarazup @nathanmartinszup @wiliansilvazup? Should we add a new flag and a new feature to the 2.8 milestone to solve this?
> so I believe the problem is in our mount.Type param
Can we confirm this before making any progress on the fix?
After more research I realized I was going for something too complicated and it was easy to fix. I ran this command
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v ${PWD}:/workspace horuszup/horusec-cli:v2.7 horusec start '-p=/workspace/' '-P=/run/test' '-o=sonarqube' '-O=/workspace/horusec.json' --log-level=debug
and was able to run the containers and get the vulnerabilities.
The problem is probably that you were getting a permission-denied error when resolving $PWD in the start command argument, so it returned an invalid path.
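A quick, illustrative sanity check to run inside the docker container before invoking the CLI:

echo "$PWD"   # should print an absolute path such as /run/test
pwd -P        # resolves symlinks and prints the physical path; use this value for -P if in doubt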
Steps to reproduce:
- install the kind tool and create a cluster
kind create cluster --name horusec
- create a config file
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: horusec
  name: horusec
spec:
  containers:
  - image: docker:20.10.12
    name: docker
    command:
    - sleep
    args:
    - '99d'
    resources: {}
    volumeMounts:
    - name: varrun
      mountPath: /var/run
  - image: docker:20.10.11-dind
    securityContext:
      privileged: true
    name: docker-daemon
    resources: {}
    volumeMounts:
    - name: varrun
      mountPath: /var/run
  volumes:
  - name: varrun
    emptyDir: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
- apply it to the cluster
kubectl apply -f ./config.yaml
- exec into docker container
exec kubectl exec -i -t -n default horusec -c docker "--" sh -c "clear; (bash || ash || sh)"
- once inside the container, create a directory
mkdir /run/test
- cd to that dir
cd /run/test
- install git in the container
apk add git
- clone examples repository
git clone https://github.com/ZupIT/horusec-examples-vulnerabilities
- run the command
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v ${PWD}:/workspace horuszup/horusec-cli:v2.7 horusec start '-p=/workspace/' '-P=/run/test' '-o=sonarqube' '-O=/workspace/horusec.json' --log-level=debug
PS: remember that the setting below can cause security problems, so make sure you really know what you are doing:
securityContext:
  privileged: true
@nathanmartinszup @wiliansilvazup @matheusalcantarazup can you guys reproduce it?
It worked for me.
It worked for me too :)
Closing, since there was no response and two people could reproduce it without error.