Support for --ssh flag
cleverer opened this issue · 10 comments
Similarly to #189, it would be really great if `--ssh` were supported. docker/cli#1419
Most probably it would be best to be able to specify a key and known_hosts from Drone secrets.
We are not directly implementing every shiny new docker feature; instead we encourage plugin forks with the new features people want to use.
More features need more testing... And since this plugin is used by nearly every drone instance, imho it is more or less under some kind of feature freeze.
So are you saying it will never get implemented, or just not now, since it is not yet an established feature?
I do get that the plugin has to be quite stable, but I also think this would be quite useful for a lot of people who use drone internally and don't want to make their dependencies public.
If I could, I'd already have made a fork and opened a pull request, but unfortunately I can't find any documentation on where to start extending the plugin or its settings…
I might, however, get away with including the necessary steps in my Dockerfile and using `build_args_from_env`
according to:
https://discourse.drone.io/t/how-can-i-pass-secrets-as-build-args-to-plugins-docker-drone-0-8/824/2.
However, I'd rather not have to do this, since build args are not considered secure.
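For illustration, the workaround from that thread looks roughly like this (a sketch only; the secret name `ssh_key` and the variable `SSH_KEY` are placeholders, not documented plugin settings). The insecurity mentioned above comes from the fact that build args are recorded in the image history:

```yaml
kind: pipeline
name: default

steps:
- name: publish
  image: plugins/docker
  settings:
    repo: octocat/hello-world
    build_args_from_env:
    - SSH_KEY            # forwarded to docker build as --build-arg SSH_KEY=...
  environment:
    SSH_KEY:
      from_secret: ssh_key
```

The Dockerfile would then declare `ARG SSH_KEY` and write the key into place before fetching dependencies — which is exactly why the key can later be recovered via `docker history`.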
I'm not necessarily opposed to supporting more docker features, however, new feature requests should ideally include a target use case. Why do we need this feature? What does it enable that you cannot do today? Why should we spend our time working on this instead of other things?
I would disagree that people need to make dependencies public to use this plugin to build images. Most people typically retrieve private dependencies in a previous pipeline step, and then use the ADD or COPY directive to add them to the image.
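As a sketch of that pattern (image names, repository URL, and paths here are hypothetical), a prior step fetches the private dependency and the Dockerfile simply copies it in:

```yaml
steps:
- name: fetch-deps
  image: alpine/git
  commands:
  - git clone git@example.com:octocat/private-dep.git vendor/private-dep

- name: publish
  image: plugins/docker
  settings:
    repo: octocat/hello-world
```

```Dockerfile
# The dependency was fetched by the previous pipeline step,
# so the build itself needs no credentials.
COPY vendor/private-dep /app/vendor/private-dep
```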
If you must encapsulate all logic inside your Dockerfile then yes, this plugin does not currently meet your needs. If a project has all logic in a Dockerfile, and uses Drone as a mere wrapper, it begs the question whether or not Drone pipelines (as implemented today) provide the optimal solution for this particular use case. I would probably say it does not.
So with that being said, perhaps this is a good use case for a custom runner that is optimized for projects that simply want `docker build` and `docker run`, and otherwise find themselves fighting against Drone to support their desired workflow. A custom runner could have a custom yaml that is optimized for this exact workflow and is not necessarily constrained to running inside Docker containers (it could use a Firecracker vm, for example). Either way, I do think this highlights an overarching problem that I have been eager to solve for quite some time.
Well, my use case is a Swift project (specifically Vapor), which has private dependencies. (If you want to have a look at it, check it out here.)
The Swift package manager would work fine with ssh auth handled by docker, but I doubt it would work by just copying files in. Right now I'd have to build the target in drone and then copy the binary into the docker image. This would prevent building the Dockerfile standalone, or I'd have to maintain multiple Dockerfiles. It would also further complicate both the drone config and the Dockerfile and be prone to errors: I'd have to maintain two sets of build paths, which I'd rather avoid (I already have a multi-stage docker build).
My idea would be to just supply drone-docker with ssh keys via a secret and have it build the Dockerfile independently of any previous steps. This would also improve reproducibility and reduce complexity.
As you say @bradrydzewski, I'm using drone as a mere wrapper for the docker build, but I think that is kind of the point of a "publishing" step. I don't think that should require a custom runner… rather, the plugin should be able to build any Dockerfile.
I also think that feature is intended for the exact problem I'm facing and would simplify a lot of projects. (If you're unsure what to expect, check out: http://blog.oddbit.com/post/2019-02-24-docker-build-learns-about-secr/.) I think it would also lower the entry hurdle for new users, make it easier to create and maintain drone configs, and make the whole platform more powerful.
All that being said, I don't think it would be sensible to implement this right now. I'd rather propose considering it when (or if…) it is a feature supported by standard `docker build`, without having to specifically enable the behaviour.
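For reference, the feature in question (as described in the linked blog post) works roughly as follows; the repository URL below is a placeholder:

```Dockerfile
# syntax=docker/dockerfile:1
FROM swift:latest

# Make the host key known so ssh does not prompt interactively.
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts

# The ssh-agent socket is mounted only for this RUN step;
# the key never ends up in a layer or in the image history.
RUN --mount=type=ssh git clone git@github.com:octocat/private-dep.git
```

The build is then invoked with `DOCKER_BUILDKIT=1 docker build --ssh default .`, forwarding the host's ssh-agent into that single step.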
Thanks for the reply.
We frequently see people that just want this:
```sh
# build image
docker build -t foo .

# run tests
docker run foo

# push image
docker push foo
```
We often see the above translated into the following, which I consider a huge hack:
```yaml
kind: pipeline
name: default

steps:
# build the docker image, but do not publish
- name: build
  image: plugins/docker
  settings:
    repo: octocat/hello-world
    tags: ${DRONE_COMMIT}
    dry_run: true
  volumes:
  - name: docker
    path: /var/run/docker.sock

# run unit or integration tests using the image that was just built
- name: test
  image: octocat/hello-world:${DRONE_COMMIT}
  commands:
  - npm run test

# publish the image after unit tests are complete.
- name: publish
  image: plugins/docker
  settings:
    repo: octocat/hello-world
    tags: ${DRONE_COMMIT}
    dry_run: false
    username:
      from_secret: docker_username
    password:
      from_secret: docker_password
  volumes:
  - name: docker
    path: /var/run/docker.sock
```
Drone is simply not optimized for this particular workflow, and people do serious gymnastics to get it working, running into unexpected issues with dangling images, layer caching, and mounting volumes.
There are many people that ask why they even need a yaml file and why they cannot just use a Dockerfile. Based on years of support and seeing people struggle to implement this particular workflow, I think a custom runner would be very useful. It would reduce a very common pain point for users. And as an added bonus it would make my life easier from a support perspective 🙂
For example:
```yaml
kind: pipeline
type: docker-compose
import: path/to/docker-compose.yml

publish:
- octocat/hello-world:foo
- octocat/hello-world:foo-bar
- octocat/hello-world:foo-bar-baz

registry:
  username:
    from_secret: docker_username
  password:
    from_secret: docker_password
```
Above is just an example, but there is something interesting about providing a small wrapper around docker compose for teams that already have a specific workflow that they use locally and want to re-use with drone.
Or maybe a traditional virtual machine (like firecracker) makes more sense:
```yaml
kind: pipeline
type: firecracker
image: alpine_with_docker.img

steps:
- name: build
  commands:
  - docker build -t octocat/hello-world .
  - docker run octocat/hello-world npm test
- name: push
  commands:
  - docker push octocat/hello-world
```
To come back to the request: it looks like it's not just adding another flag to the plugin. It also means starting an ssh-agent, adding a key to the agent, and after all of that forwarding the ssh-agent socket to the docker process... Besides that, what is the minimum version? 19.03?
According to the official docs it is available from 18.09, but requires enabling the BuildKit backend and the experimental Dockerfile syntax.
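Putting those pieces together, the end-to-end flow the plugin would have to perform looks roughly like this on 18.09 (the key path is a placeholder; on 18.09 the Dockerfile additionally needs the `# syntax=docker/dockerfile:experimental` directive for `RUN --mount=type=ssh`):

```sh
# Enable the BuildKit backend (not the default before 23.0).
export DOCKER_BUILDKIT=1

# Start an agent and load the deploy key.
eval "$(ssh-agent -s)"
ssh-add /path/to/deploy_key

# Forward the agent socket into the build.
docker build --ssh default="$SSH_AUTH_SOCK" -t octocat/hello-world .
```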
Came across this issue as we investigate switching from circleci.
Using these new features (https://docs.docker.com/develop/develop-images/build_enhancements/) is becoming pretty standard from a best-practice POV, so it would be great to support them.
I'm not against adding new available flags, just with experimental flags I prefer to use a careful path :)
As far as I can see, the `--ssh` and `--secret` flags now work with docker by default, so I think they are not experimental anymore.
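For context, the `--secret` counterpart mentioned above works like this with current BuildKit (the secret id `npm_token` and the file name are placeholders):

```sh
# Mount the file as a build secret; it is exposed under /run/secrets/
# only for the duration of the RUN step and never written into a layer.
docker build --secret id=npm_token,src=./npm_token.txt -t octocat/hello-world .
```

```Dockerfile
# syntax=docker/dockerfile:1
FROM node:lts
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm install
```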
Has the plugin been updated to docker 18.09 since #189 (comment) ?
Any chance this (or #189) get implemented at some point?
Maybe even as a separate buildkit drone-plugin?