[requirement] Add a Dind container able to build the documentation using mkdocs
cmoulliard opened this issue · 7 comments
Description
When we try to build the technical documentation using mkdocs, then the plugin-techdocs-backend
reports the following error
2022-03-03T14:32:58.733Z techdocs error Failed to build the docs page: Failed to generate docs from ${inputDir} into ${outputDir}; caused by Error: This operation requires Docker. Docker does not appear to be available. Docker.ping() failed with; caused by Error: connect ENOENT /var/run/docker.sock
at _TechdocsGenerator.run (/app/node_modules/@backstage/techdocs-common/dist/index.cjs.js:312:13)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async DocsBuilder.build (/app/node_modules/@backstage/plugin-techdocs-backend/dist/index.cjs.js:136:5)
at async DocsSynchronizer.doSync (/app/node_modules/@backstage/plugin-techdocs-backend/dist/index.cjs.js:214:23)
at async /app/node_modules/@backstage/plugin-techdocs-backend/dist/index.cjs.js:305:7
This happens because the Backstage backend runs as a pod within the cluster and has no access to a container engine such as Docker.
Proposal
Add a container to the Backstage deployment YAML that is able to run Docker in Docker (dind-rootless).
Example:
spec:
  template:
    spec:
      containers:
        - command:
            - dockerd
            - --host
            - tcp://127.0.0.1:2375
          image: registry.harbor.10.0.77.176.nip.io:32443/tap/dind-rootless
          imagePullPolicy: IfNotPresent
          name: dind-daemon
          resources: {}
          securityContext:
            privileged: true
            runAsUser: 0
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /tmp
              name: tmp
            - mountPath: /output
              name: output
        - name: backstage
          env:
            - name: DOCKER_HOST
              value: tcp://localhost:2375
          volumeMounts:
            - mountPath: /tmp
              name: tmp
            - mountPath: /output
              name: output
      volumes:
        - emptyDir: {}
          name: tmp
        - emptyDir: {}
          name: output
Have you tried to generate the documentation locally? More info here: https://backstage.io/docs/features/techdocs/getting-started#disabling-docker-in-docker-situation-optional
Such an option will only work (I think) if you add the needed TechDocs packages to the container image that you build yourself. This is doable.
Since it is not easy for a user to know what must be included in the backend image for a given plugin x.y.z, I think we should help them: the Backstage project could offer a nice way to generate, for the Docker/Kubernetes world, a Makefile or bash script able to build the proper image with the tools/packages required by the selected plugins.
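For reference, switching TechDocs away from Docker-based generation is done in app-config.yaml, as described in the linked documentation. A minimal sketch (the `builder` and `generator.runIn` keys are the documented Backstage settings; the values shown assume on-the-fly local generation):

```yaml
# app-config.yaml (excerpt)
techdocs:
  # Build docs inside the backend process instead of delegating to an
  # external publisher/builder.
  builder: 'local'
  generator:
    # 'local' runs mkdocs directly in the backend container (mkdocs and
    # mkdocs-techdocs-core must be installed in the image);
    # 'docker' (the default) requires access to a Docker daemon.
    runIn: 'local'
```

With `runIn: 'local'`, the `connect ENOENT /var/run/docker.sock` error above goes away, since the backend no longer calls `Docker.ping()`.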
Questions:
- Should we, as part of this Helm chart project, provide a Dockerfile extending the one from the Backstage backend project to build the image of the frontend and backend, including the needed tools (mkdocs, sqlite when PostgreSQL is not enabled, ...), to help users of the chart?
- Another option would be to use the image https://hub.docker.com/r/spotify/techdocs and mount it as a separate container sharing a persistent or emptyDir volume with the backend.
I experimented with the approach where a 2nd container running mkdocs is mounted, but the local build fails:
2022-07-18T15:24:34.669Z techdocs error Failed to build the docs page: Failed to generate docs from /tmp/backstage-S89f8g into /tmp/techdocs-tmp-1YXNOi; caused by Error: spawn mkdocs ENOENT
at _TechdocsGenerator.run (/app/node_modules/@backstage/plugin-techdocs-node/dist/index.cjs.js:423:13)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async DocsBuilder.build (/app/node_modules/@backstage/plugin-techdocs-backend/dist/index.cjs.js:137:5)
What I changed within the Deployment resource of the Helm chart:
spec:
  replicas: 1
  selector:
    matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
      app.kubernetes.io/component: backstage
  template:
    ...
    spec:
      volumes:
        ...
        - emptyDir: {}
          name: tmp
        - emptyDir: {}
          name: output
      {{- if .Values.backstage.image.pullSecrets }}
      imagePullSecrets:
        {{- range .Values.backstage.image.pullSecrets }}
        - name: {{ . }}
        {{- end }}
      {{- end }}
      containers:
        - command: ["sh", "-c", "tail -f /dev/null"]
          image: spotify/techdocs
          name: techdocs
          volumeMounts:
            - mountPath: /tmp
              name: tmp
            - mountPath: /output
              name: output
        - name: backstage-backend
          image: {{ include "backstage.image" . }}
          imagePullPolicy: {{ .Values.backstage.image.pullPolicy | quote -}}
          ...
          {{- if .Values.backstage.extraAppConfig }}
          volumeMounts:
            {{- range .Values.backstage.extraAppConfig }}
            - name: {{ .configMapRef }}
              mountPath: "/app/{{ .filename }}"
              subPath: {{ .filename }}
            {{- end }}
            - name: tmp
              mountPath: /tmp
            - name: output
              mountPath: /output
          {{- end }}
I successfully tested the local approach, where I added the mkdocs Python package and OpenJDK to the backend Dockerfile:
FROM node:16-bullseye-slim
WORKDIR /app
# install sqlite3 dependencies, you can skip this if you don't use sqlite3 in the image
RUN apt-get update && \
# apt-get install -y --no-install-recommends libsqlite3-dev python3 build-essential && \
apt-get install -y --no-install-recommends libsqlite3-dev python3 build-essential pip curl openjdk-11-jdk graphviz fontconfig && \
rm -rf /var/lib/apt/lists/* && \
yarn config set python /usr/bin/python3
# Download plantuml file, Validate checksum & Move plantuml file
RUN curl -o plantuml.jar -L http://sourceforge.net/projects/plantuml/files/plantuml.1.2022.4.jar/download && echo "246d1ed561ebbcac14b2798b45712a9d018024c0 plantuml.jar" | sha1sum -c - && mv plantuml.jar /opt/plantuml.jar
# Install the mkdocs python package
RUN pip install mkdocs-techdocs-core==1.0.2
# Create script to call plantuml.jar from a location in path
RUN echo $'#!/bin/sh\n\njava -jar '/opt/plantuml.jar' ${@}' >> /usr/local/bin/plantuml
RUN chmod 755 /usr/local/bin/plantuml
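To sanity-check such an image before deploying it with the chart, the tools the generator spawns can be probed directly in a throwaway container. A sketch, assuming the image was built locally and tagged `backstage-backend:local` (the tag is hypothetical):

```shell
# Build the backend image from the Dockerfile above (hypothetical tag)
docker build -t backstage-backend:local .

# Verify the binaries that TechDocs local generation shells out to
docker run --rm backstage-backend:local mkdocs --version
docker run --rm backstage-backend:local plantuml -version
```

If `mkdocs --version` fails here, the backend will log the same `spawn mkdocs ENOENT` error shown earlier, so this catches the problem before the pod is rolled out.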
...
I would scope this out of the Helm chart, as the local option seems to be the way to go. Tagging @iamEAP, as he can provide more guidance on this.
When running TechDocs in this way (local generation), adding those dependencies to the image running the backend itself is what we typically suggest.