ossf/package-analysis

Sandbox failed (error starting container: exit status 125)

rodion-gudz opened this issue · 9 comments

I get an error with the default installation.

Command

scripts/run_analysis.sh -ecosystem pypi -package Django

-----------------------------------------
Package Details
Ecosystem:                pypi
Package:                  Django
Version:                  
Location:                 remote
-----------------------------------------
Analysing package

2023-03-13T10:57:57.754Z        INFO    worker/logging.go:63    Got request     {"ecosystem": "pypi", "name": "Django", "version": "", "package_path": "", "results_bucket_override": ""}
2023-03-13T10:57:57.786Z        INFO    analyze/main.go:199     Starting static analysis
2023-03-13T10:57:57.786Z        DEBUG   sandbox/init.go:71      Creating bridge network
2023-03-13T10:57:57.794Z        DEBUG   sandbox/init.go:31      Loading iptable rules
2023-03-13T10:57:57.797Z        INFO    worker/runstatic.go:35  Running static analysis {"tasks": ["all"]}
2023-03-13T10:57:57.797Z        DEBUG   sandbox/sandbox.go:240  podman  {"args": ["--cgroup-manager=cgroupfs", "--events-backend=file", "pull", "gcr.io/ossf-malware-analysis/static-analysis:latest"]}
2023-03-13T10:57:58.730Z        DEBUG   sandbox/sandbox.go:240  podman  {"args": ["--cgroup-manager=cgroupfs", "--events-backend=file", "image", "prune", "-f"]}
2023-03-13T10:57:58.860Z        DEBUG   sandbox/sandbox.go:240  podman  {"args": ["--cgroup-manager=cgroupfs", "--events-backend=file", "create", "--runtime=/usr/local/bin/runsc_compat.sh", "--init", "--dns=8.8.8.8", "--dns=8.8.4.4", "--dns-search=.", "--network=analysis-net", "-v", "/results.json:/results.json", "gcr.io/ossf-malware-analysis/static-analysis:latest"]}
2023-03-13T10:57:59.194Z        DEBUG   sandbox/sandbox.go:240  podman  {"args": ["--cgroup-manager=cgroupfs", "--events-backend=file", "start", "--runtime=/usr/local/bin/runsc_compat.sh", "--runtime-flag=root=/var/run/runsc", "--runtime-flag=debug-log=/tmp/sandbox_logs_3054061372/runsc.log.%COMMAND%", "2eba113554b38dbf232e29d446f3e7c18cfd2831ca721daf37e20143649610d3"]}
2023-03-13T10:57:59.258Z        WARN    log/writer.go:63        time="2023-03-13T10:57:59Z" level=warning msg="Couldn't run auplink before unmount /var/lib/containers/storage/aufs/mnt/d1d267d24ebbf46d29baa0a5a8eec260674db6b43604655a7595aa5d63e5391b: exec: \"auplink\": executable file not found in $PATH"    {"args": ["/usr/local/bin/staticanalyze", "-ecosystem", "pypi", "-package", "django", "-version", "4.1.7", "-analyses", "all", "-output", "/results.json"]}
github.com/ossf/package-analysis/internal/log.WriteTo
        /src/internal/log/writer.go:63
github.com/ossf/package-analysis/internal/log.Writer.func1
        /src/internal/log/writer.go:40
2023-03-13T10:57:59.449Z        WARN    log/writer.go:63        Error: unable to start container "2eba113554b38dbf232e29d446f3e7c18cfd2831ca721daf37e20143649610d3": error mounting storage for container 2eba113554b38dbf232e29d446f3e7c18cfd2831ca721daf37e20143649610d3: error creating aufs mount to /var/lib/containers/storage/aufs/mnt/d1d267d24ebbf46d29baa0a5a8eec260674db6b43604655a7595aa5d63e5391b: invalid argument    {"args": ["/usr/local/bin/staticanalyze", "-ecosystem", "pypi", "-package", "django", "-version", "4.1.7", "-analyses", "all", "-output", "/results.json"]}
github.com/ossf/package-analysis/internal/log.WriteTo
        /src/internal/log/writer.go:63
github.com/ossf/package-analysis/internal/log.Writer.func1
        /src/internal/log/writer.go:40
2023-03-13T10:57:59.451Z        DEBUG   sandbox/sandbox.go:240  podman  {"args": ["--cgroup-manager=cgroupfs", "--events-backend=file", "stop", "-t=5", "-i", "2eba113554b38dbf232e29d446f3e7c18cfd2831ca721daf37e20143649610d3"]}
2023-03-13T10:57:59.581Z        DEBUG   sandbox/sandbox.go:240  podman  {"args": ["--cgroup-manager=cgroupfs", "--events-backend=file", "rm", "--all", "--force"]}
2023-03-13T10:57:59.914Z        FATAL   analyze/main.go:128     Static analysis aborted {"error": "sandbox failed (error starting container: exit status 125)"}
main.staticAnalysis
        /src/cmd/analyze/main.go:128
main.main
        /src/cmd/analyze/main.go:200
runtime.main
        /usr/local/go/src/runtime/proc.go:250

-----------------------------------------
Analysis failed

docker process exited with code 1

Ecosystem:                pypi
Package:                  Django
Version:                  
Location:                 remote
-----------------------------------------

System information:
Ubuntu 20.04.5 LTS (Focal Fossa)
Docker version 20.10.23+azure-2, build 715524332ff91d0f9ec5ab2ec95f051456ed1dba
go version go1.20.1 linux/amd64

I wasn't able to replicate this issue; however, I suspect it has to do with the filesystem backing the outer Docker container or /var/lib/containers.

Can you run docker info and paste the result here?

@calebbrown

Can you run docker info and paste the result here?

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc., 0.10.3+azure-1)
  compose: Docker Compose (Docker Inc., 2.16.0+azure-2)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 7
 Server Version: 20.10.23+azure-2
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: false
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f
 runc version: 5fd4c4d144137e991c4acebb2146ab1483a97925
 init version: 
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-1104-azure
 Operating System: Ubuntu 20.04.5 LTS (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 7.771GiB
 Name: codespaces-66375d
 ID: ZDOU:ZHXV:T3AW:ABQ7:4UIJ:5GSI:BS2P:TR4I:WYAB:67JQ:2QGD:LHC7
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: codespacesdev
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

Thanks!

I notice that you are running this on Azure via GitHub's Codespaces - is that correct?

Can you also paste in here the output from this command:

docker run --privileged -ti \
    -v /var/lib/containers:/var/lib/containers \
    --rm --entrypoint=/bin/sh gcr.io/ossf-malware-analysis/analysis \
    -c 'buildah info'

Assuming that you're attempting to run this in Codespaces, I ran buildah info via the above command and got this output:

{
    "host": {
        "CgroupVersion": "v1",
        "Distribution": {
            "distribution": "ubuntu",
            "version": "22.04"
        },
        "MemFree": 162275328,
        "MemTotal": 4123181056,
        "OCIRuntime": "crun",
        "SwapFree": 0,
        "SwapTotal": 0,
        "arch": "amd64",
        "cpus": 2,
        "hostname": "6cce56644467",
        "kernel": "5.4.0-1104-azure",
        "os": "linux",
        "rootless": false,
        "uptime": "1h 4m 45.1s (Approximately 0.04 days)"
    },
    "store": {
        "ContainerStore": {
            "number": 0
        },
        "GraphDriverName": "aufs",
        "GraphOptions": null,
        "GraphRoot": "/var/lib/containers/storage",
        "GraphStatus": {
            "Backing Filesystem": "overlayfs",
            "Dirperm1 Supported": "false",
            "Dirs": "0",
            "Root Dir": "/var/lib/containers/storage/aufs"
        },
        "ImageStore": {
            "number": 0
        },
        "RunRoot": "/run/containers/storage"
    }
}

The issue is that GraphDriverName is aufs, which is likely because the backing filesystem is overlayfs: the terminal in Codespaces appears to itself be a Docker container (a container-inside-container setup).

Running mount shows:

$ mount
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/...,xino=off)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,relatime,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
/dev/sdb1 on /usr/sbin/docker-init type ext4 (ro,relatime,discard)
/dev/sda1 on /tmp type ext4 (rw,relatime)
/dev/sdb1 on /vscode type ext4 (rw,relatime,discard)
/dev/loop0 on /workspaces type ext4 (rw,nodev,relatime)
/dev/sdb1 on /.codespaces/bin type ext4 (rw,relatime,discard)
/dev/loop0 on /etc/resolv.conf type ext4 (rw,nodev,relatime)
/dev/loop0 on /etc/hostname type ext4 (rw,nodev,relatime)
/dev/loop0 on /etc/hosts type ext4 (rw,nodev,relatime)
/dev/loop0 on /home/vscode/.minikube type ext4 (rw,nodev,relatime)
/dev/sdb1 on /workspaces/.codespaces/shared type ext4 (rw,relatime,discard)
/dev/loop0 on /workspaces/.codespaces/.persistedshare type ext4 (rw,nodev,relatime)
/dev/loop0 on /var/lib/docker type ext4 (rw,nodev,relatime)
none on /sys/kernel/security type securityfs (rw,relatime)

I suspect that mounting one of the ext4-backed paths (e.g. /tmp) over /var/lib/containers, instead of leaving it on the root FS (overlay2), will work.

Can confirm that changing the container mount worked.

I ran the following commands in GitHub Codespaces and was able to succeed in running the analysis:

mkdir /tmp/containers
docker run --privileged -ti \
    -v /tmp/package-analysis/results:/results -v /tmp/containers:/var/lib/containers \
    gcr.io/ossf-malware-analysis/analysis analyze \
    -ecosystem pypi -package django \
    -upload file:///results

@calebbrown is there anything we can improve on our end?

I think we should update the run_analysis.sh script to:

  1. support Codespaces
  2. if not in Codespaces, alert the user and give them help

Detecting the underlying filesystem can be done using a command like findmnt -T /var/lib -n -o FSTYPE (note: /var/lib will almost certainly exist, but /var/lib/containers may not).
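As a rough sketch, run_analysis.sh could wrap that check like this (function names are hypothetical, and treating overlay/aufs as the problematic backing filesystems is an assumption based on the failure above):

```shell
#!/bin/sh
# Sketch only: detect an unsupported filesystem backing container storage.
# /var/lib is probed because /var/lib/containers may not exist yet.
backing_fs() {
  findmnt -T "$1" -n -o FSTYPE
}

# The aufs-on-overlayfs combination is what broke the sandbox above, so
# flag overlay (and aufs) backing filesystems as problematic.
is_problem_fs() {
  case "$1" in
    overlay|aufs) return 0 ;;
    *) return 1 ;;
  esac
}

# Example: is_problem_fs "$(backing_fs /var/lib)" && echo "use an ext4 path" >&2
```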

Detecting Codespaces can be done by looking at the hostname and username. The hostname is prefixed with codespaces- and the username is codespace (it could also be codespacesdev).
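That heuristic could look something like the following (in_codespaces is a hypothetical helper, not part of the existing script):

```shell
#!/bin/sh
# Sketch only: Codespaces detection from the hostname prefix and username.
in_codespaces() {
  host="$1"
  user="$2"
  case "$host" in
    codespaces-*) ;;   # prefix matches; fall through to the user check
    *) return 1 ;;
  esac
  [ "$user" = "codespace" ] || [ "$user" = "codespacesdev" ]
}

# Typical call: in_codespaces "$(hostname)" "$(id -un)"
```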

Finally, an environment variable or flag can be used to specify an override.
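For example, the override could be an environment variable checked before any auto-detection (PACKAGE_ANALYSIS_CONTAINER_DIR is an invented name here; the real script would pick its own flag or variable):

```shell
#!/bin/sh
# Sketch only: an explicit override wins; otherwise use the default path.
container_dir() {
  if [ -n "${PACKAGE_ANALYSIS_CONTAINER_DIR:-}" ]; then
    echo "$PACKAGE_ANALYSIS_CONTAINER_DIR"
  else
    # auto-detection (filesystem / Codespaces checks) would slot in here
    echo "/var/lib/containers"
  fi
}
```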