docker/for-mac

Support for sharing unix sockets

BouncyLlama opened this issue · 133 comments

Expected behavior

When mounting a directory containing unix sockets, the sockets should function the same as they do on a Linux host.

Actual behavior

The socket is 'there', but non-functional.

Information

After reading several forum threads, it appears that there is a workaround with socat over TCP, but this is rather slow.

The documentation has this to say: 'Socket files and named pipes only transmit between containers and between OS X processes -- no transmission across the hypervisor is supported, yet'
Hopefully this is a planned feature already, but I did not see any existing issues open in this tracker for this particular issue, although it relates to #410 which asks specifically for SSH_AUTH_SOCK to be supported.

Host OS: Mac OS X 10.10.5

Steps to reproduce the behavior

  1. mount a directory containing unix sockets like so: '-v "/directorywithsockets:/otherdirectory"'
  2. attempt to send data to/from the host/container via the socket

This is on the roadmap.

Interestingly, even a socket created in the container is non-functional if it was created on a mounted volume of the host.

jippi commented

Any ETA on this? :)

Currently it's blocking hashicorp/nomad#1091 and other projects :)

This is currently scheduled for resolution in November. Sorry for the delay.

jippi commented

@dsheets okay, will it be available in beta builds beforehand?

Our goal is to ship it in beta builds in November.

jippi commented

amazing @dsheets - if you need any testing or otherwise beforehand, I'm willing to help out!

Our goal is to ship it in beta builds in November.

@dsheets That statement makes me think the work for this is already underway. However, the status label is still "1-acknowledged". Is this currently being worked on, or still on the todo list?

We are in November now :) Any guess when this will be in Beta? This feature blocks a lot for us.

Any updates? Even a mention that this feature has been abandoned would be better than nothing.

The feature has not been abandoned. It is still on the roadmap. Thanks for your patience.

It is the last day of November. Any update on when this will be in the beta @dsheets?

Work has begun but is currently delayed behind performance work. Sorry about that. Stay subscribed to get updates. Thanks for your patience!

Any updates on this?

I've started using https://github.com/avsm/docker-ssh-agent-forward to work around this issue

Any updates @dsheets ? Is there an issue tracking the performance work? If that is going to block for much longer, I am planning to spend some time cleaning up https://github.com/avsm/docker-ssh-agent-forward but if docker will have proper support soon I won't bother.

+1

Support for this is planned and has been started but is delayed indefinitely behind other work. @wysenynja I would recommend cleaning up avsm/docker-ssh-agent-forward in the meantime.

The docker-ssh-agent-forward workaround is working well for me, I've raised avsm/docker-ssh-agent-forward#9 with some fixes.

Keeping fingers crossed for this fix tho.

I've merged a bunch of changes from avsm's helpful fix into a new fork that is on docker hub: https://github.com/uber-common/docker-ssh-agent-forward

It's nearly the end of March... any update on this work? I actually need to use this for something that isn't SSH.

@twexler we are looking at the Q2 roadmap now and this support is contending with other work (mostly performance related) for priority. If you (or anyone else receiving this) have use cases that aren't X11 or SSH (these are great but more use cases are always better), we'd really, really love to hear about them to help make the case to prioritize this work. Thanks!

Here's one use case. We're considering using UNIX sockets so that a process running in the Docker container can notify another process running in the host system that it is already listening on some specific TCP port, without a need for periodic polling of that port. This works great on Linux, but not in our development environment on macOS. From my perspective, feature parity between the development and production environment is more important than performance.
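That notify-on-ready pattern can be sketched in a few lines (a hedged sketch, not the poster's actual code; the path is a throwaway, and the "host" and "container" sides here are just two threads of one process):

```python
import os
import socket
import tempfile
import threading

# Hypothetical shared location; on Linux this would live on a volume
# mounted into the container.
SOCK_PATH = os.path.join(tempfile.mkdtemp(), "ready.sock")

# "Host" side: listen for a one-shot readiness message.
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(SOCK_PATH)
srv.listen(1)

messages = []

def host_listener():
    conn, _ = srv.accept()
    messages.append(conn.recv(64))
    conn.close()

t = threading.Thread(target=host_listener)
t.start()

# "Container" side: announce that the service is now listening,
# instead of making the host poll the TCP port.
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(SOCK_PATH)
cli.sendall(b"listening on :8080")
cli.close()

t.join()
print(messages[0])  # b'listening on :8080'
```

On Linux this works across the container boundary when the socket's directory is a shared volume; on Docker for Mac the connect() fails, which is exactly the gap this issue describes.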

It's incredibly frustrating to be able to share the socket on Linux boxes but not on my dev machine. Feature parity is definitely more important than performance for me as well; after all, this is one of the greatest use cases for Docker: run the same thing no matter the underlying OS.

My use case is reaching Postgres (or MySQL, it doesn't matter) via a socket. It doesn't make sense to have it listen on 0.0.0.0 just so containers can access it.

Thank you, I have updated our tracking issue with the mixed host-container use case and dev/prod parity requirement. I've also added the database socket use case. I think these should be compelling enough to (finally) get this shipped. Thank you for your patience and sorry it has taken this long. 🙂

Hope you managed to realize a good chunk of the performance improvements you prioritized. Stoked you're getting around to this.

Thank you for your patience and remaining polite throughout a year of our nagging 🏆

Use Case:
Forwarding Mac host network packets to a container for analysis and intrusion detection.

Issues:
Using named pipes (e.g. something.fifo) to share data with my container is a workaround for me. For my project, I really need to be able to share host network interfaces with the container on Docker for Mac, as in the Linux version of Docker, in order to read network packets for an IDS. Since this is not currently achievable, and it doesn't seem that Docker will support sharing host interfaces anytime soon, I decided to try using named pipes to dump network packets in through tcpdump. This obviously did not work, which is why I am here now to share my use case.

Current Objective:
Since it seems I have a better chance of Docker supporting named pipes, I am restructuring my project's Mac usage around named pipes to share network packets with my container instance.

That being said, I would much prefer to use --net=host 😄

Mange commented

I got this to work using socat as a workaround for now. I start a socket-to-tcp proxy on the host, expose the port to the container, and then start a tcp-to-socket proxy inside the container.

Wrapper script on host:

image_name=...

# Proxy the host's ssh-agent unix socket to TCP port 3434.
socat TCP-LISTEN:3434,reuseaddr,fork "UNIX-CLIENT:$SSH_AUTH_SOCK" &
socat_pid=$!
trap "kill -- $socat_pid" EXIT

docker run \
  --interactive --tty --rm \
  --network=host \
  -p 3434 \
  $image_name \
  bin/container-ssh-wrapper "$@"

Then in the container wrapper script:

# Bridge a unix socket to TCP port 3434 on the host, where the SSH agent is proxied.
export SSH_AUTH_SOCK="$HOME/ssh-agent.socket"
socat UNIX-LISTEN:$SSH_AUTH_SOCK,fork TCP:localhost:3434 &

exec "$@"

This seems to work well. It will not support running multiple instances at the same time because of the fixed port number, but that should be very easy to fix if you have that requirement: determine a free port number on the host and pass it to the container as an environment variable.

Both Ubuntu and Homebrew on Mac OS X have socat in their repositories. My wrapper script automatically installs it on the host if it is not already installed, so user setup effort is very low.

Looking forward to having this natively, but this might help some people deal with it until then. 😄
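The free-port suggestion above can be implemented with the bind-to-port-0 trick; a sketch (Python purely for illustration, and `free_port` is a hypothetical helper name, not part of any tool mentioned here):

```python
import socket

def free_port() -> int:
    """Ask the OS for an unused TCP port by binding to port 0."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = free_port()
print(port)  # an OS-assigned ephemeral port
```

Note the small race: the port could be taken between this check and socat binding it. For a local dev wrapper that is usually acceptable.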

We use FIFOs for high-speed message-passing between two containers. Performance-wise, on a Linux box it demolished stdin/stdout with --log-driver=none, and was around 30% faster than unix domain sockets.

Agreed @shenberg! FIFOs are preferable to unix sockets!

FIFOs do not solve the same problems as unix sockets:

UNIX domain sockets and FIFOs may share some part of their implementation, but they are conceptually very different. A FIFO functions at a very low level: one process writes bytes into the pipe and another one reads from it. A UNIX domain socket has the same behaviour as a TCP/IP socket.

A socket is bidirectional and can be used by many processes simultaneously. A process can accept many connections on the same socket and serve several clients simultaneously. The kernel delivers a new file descriptor each time connect(2) or accept(2) is called on the socket, so the packets always go to the right process. On a FIFO, this would be impossible. For bidirectional communication you need two FIFOs, and you need a pair of FIFOs for each of your clients. There is no way of reading or writing selectively, because FIFOs are a much more primitive way to communicate.
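The distinction is easy to demonstrate (a minimal sketch with a throwaway socket path): one listening unix socket serves several clients, and each accept() hands the server a fresh descriptor, which a FIFO cannot do:

```python
import os
import socket
import tempfile
import threading

SOCK = os.path.join(tempfile.mkdtemp(), "demo.sock")

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(SOCK)
srv.listen(5)

def serve():
    # Each accept() yields a distinct file descriptor per client,
    # so replies always go back to the right peer.
    for _ in range(2):
        conn, _ = srv.accept()
        name = conn.recv(32)
        conn.sendall(b"hello " + name)
        conn.close()

t = threading.Thread(target=serve)
t.start()

replies = []
for name in (b"alice", b"bob"):
    c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    c.connect(SOCK)
    c.sendall(name)
    replies.append(c.recv(64))
    c.close()

t.join()
print(replies)  # [b'hello alice', b'hello bob']
```

With FIFOs, serving alice and bob this way would need two dedicated pipe pairs and out-of-band bookkeeping about which bytes belong to whom.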

Interfacing with existing processes such as ssh-agent which use unix sockets can't easily be solved by simply using FIFOs instead. Therefore, the need surfaced by this issue is still valid.

OpenSSH can forward unix sockets. Would this help as part of a workaround?

Any update on this?

+1

@automaticgiant : There was a workaround using this method posted in #410 in this comment. The need for full unix socket support is still valid, as this workaround is just that: a workaround.

Any update? This is really blocking the full potential of Docker for Mac...

+1 for sharing unix sockets

+1

Is there a workaround for this that works during the build step? The methods involving socat only work with docker run, since they rely on the --net and -p features, meaning they won't work during docker build.

Any updates?

Moment for a fun fact: this is stopping Docker DOOM from working on a macOS host.

Ok @docker, no DOOM is sad, amongst other things that we could be doing with this fix... Time to fix this issue on MacOS!

What say you?

+1 for a common way to share the ssh agent
This is my desperate approach after I spent half a day trying everything; maybe it can help someone:
https://gist.github.com/KernelFolla/a6b5150ca6187cc3222923ce53f19084
I bypass the entry point to start a new ssh agent.

@dsheets It's almost the end of August now... what's up with this work? Do you guys need some help?

@dsheets I'm not a big fan of "ping" messages (like the one I'm writing now), but since the last update was in March, can you provide us with any update on this issue?

This issue is a roadblock for us as we try to provide a common image for all developers on shared VMs. We never want an SSH key to appear anywhere on disk, so we use ssh agents to hold the keys in memory. This makes it impossible to work around the problem by mounting private keys into a container and starting another ssh-agent there.

@Mange's workaround with socat seems viable for the time being, but I really don't want my code to be riddled with if uname == Darwin clauses... so 👍 on a final solution!

Is there an actual ETA for this? It's a year after beta fixes were promised, and this will have some impact on whether I bother to get a Mac for my next system.

quinn commented

This is currently inhibiting my ability to test something that depends on a MySQL socket locally. It would be great to have this feature fixed on OS X...

In case you don't want to / can't use socat...
Following up on @Mange's workaround, here is a solution without socat, using only pure netcat:

On the host:

# Listen on TCP 12345 and relay to the ssh-agent unix socket;
# the FIFO feeds the agent's replies back into the TCP listener.
mkfifo myfifo
nc -lk 12345 <myfifo | nc -U $SSH_AUTH_SOCK >myfifo

On the container:

# Expose a unix socket that relays to the host's TCP listener; the loop
# re-establishes the relay after each disconnect.
mkfifo myfifo
while true; do
  nc docker.for.mac.localhost 12345 <myfifo | nc -Ul /tmp/ssh-agent.sock >myfifo
done &

export SSH_AUTH_SOCK=/tmp/ssh-agent.sock

ssh ...

This tunnels the socket connection over TCP. Since it is netcat only, you have to re-establish the connection manually after the pipe has disconnected.

Any updates on this issue?
Ran into it today and it took a while to find this issue...

They just closed the problem without solving it.
No issue no problem :)

... unix socket still does not work on macOS!
Issue opened two years and five months ago!

If socket mounting is not supported, then a PR that calls this out when the user attempts it could save us all time and effort. A PR that points to this exact GitHub issue would also force more attention.

Anyone have time to submit a PR? Clearly there must be a larger issue under the hood that keeps this issue from being resolved. Surely a warning or nag is better than blind optimism and wasted time?

fbsb commented

Too bad osxfs and docker-for-mac are not open-source 😞

dhull commented

Regarding @domdom82's netcat solution, I found that for the container side to work reliably I had to add the -k option to the second netcat invocation on the container side:

nc docker.for.mac.localhost 12345 <myfifo | nc -Ulk /tmp/ssh-agent.sock >myfifo

@dsheets

Here's a use case that leverages UNIX domain sockets and containers together in a unique and, IMHO, way cool, security-improving way.

UNIX domain sockets allow the server side to authenticate the client; no other type of socket or IPC allows this. This is an amazing way to pass secrets (credentials) into containers - except on the Mac under Docker :-(.

See https://www.peerlyst.com/posts/sharing-secrets-with-containers-using-custodia-alan-robertson and https://www.peerlyst.com/posts/the-authproxy-method-of-sharing-secrets-safely-with-containers-alan-robertson for reasons why this is an important security feature.

FWIW: Custodia is a key part of Red Hat's identity management solution. The authproxy method is much simpler and easily implemented.
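The authentication property being described is SO_PEERCRED: on Linux, the server side of a connected unix socket can read the peer's pid/uid/gid from the kernel. A minimal sketch (Linux-only; demonstrated on a socketpair, so the "peer" is this same process):

```python
import os
import socket
import struct

# Create a connected pair of unix sockets within this process.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# SO_PEERCRED fills struct ucred: three native ints (pid, uid, gid).
creds = b.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                     struct.calcsize("3i"))
pid, uid, gid = struct.unpack("3i", creds)

# Since both ends live in this process, the peer is ourselves.
print(pid == os.getpid(), uid == os.getuid(), gid == os.getgid())

a.close()
b.close()
```

This is precisely what cannot be made meaningful across the Docker for Mac hypervisor: the container's kernel has no pid/uid namespace in common with the macOS host.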

@jpcope It's not the mount that can detect the problem, because the most reliable way to share a socket is to mount the directory which contains it; there's nothing special about that directory. If you mount the directory containing the socket, then the application on the host can crash and restart (creating a new socket inode) without every container that wants to use it having to restart.

Any updates on this? Can't wait to see this feature. It is the only thing that stops my team from using Docker for development, and maybe later also in production!

Here's my use case:

I'm using Google Cloud SQL in a project. They provide a proxy that creates a socket that allows communication with the server.

I run the proxy on my host, and can connect through the socket. But when I try to connect through the socket within a container, that fails (of course).

My only solution right now is to create the socket within the container itself, but because this project is using multiple containers, that means I have to create it multiple times, which is not ideal.

Adding my voice to the others.
I am evaluating nginx/unit for future use.
The official unit docker image presents a unix socket for control.
Given that I do not work with sockets on a daily basis, it took me ages to figure out that it was Docker not working, and not my invocation of curl/socat/nc.

Maybe if this is not going to work on macOS, it could be made more obvious that it is not working?
Not sure how; maybe documentation (I searched but could not find any reference to this bug other than this GitHub issue), or maybe just prevent directories containing sockets from being mounted on macOS?

I agree about making it more obvious. A simple os.Stat on the source of the volume mount could check whether it's a socket and then direct the user here. That would probably help avoid duplicate tickets as well.
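A check of that shape could look like this (a sketch; `mounted_sockets` is a hypothetical helper, and Docker performs no such validation today; note it only catches sockets that exist at mount time):

```python
import os
import socket
import stat
import tempfile

def mounted_sockets(src: str) -> list:
    """Return any unix sockets at or under a volume-mount source path."""
    found = []
    mode = os.stat(src).st_mode
    if stat.S_ISSOCK(mode):
        return [src]
    if stat.S_ISDIR(mode):
        for root, _, files in os.walk(src):
            for f in files:
                p = os.path.join(root, f)
                if stat.S_ISSOCK(os.stat(p).st_mode):
                    found.append(p)
    return found

# Demo with a throwaway directory containing one socket.
d = tempfile.mkdtemp()
s = socket.socket(socket.AF_UNIX)
s.bind(os.path.join(d, "app.sock"))
print(mounted_sockets(d))  # one entry, ending in app.sock
```

A warning based on this would cover the direct-mount case but, as noted below, not a socket created later inside an already-mounted directory.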

@grimmy That would catch a small percentage of the cases. The more robust way to share a socket is to share the directory it's in -- or will be in - as I noted in a comment above. Personally, I'd rather see them put the effort into fixing the darn thing.

@Alan-R hmm I actually haven't tried mounting the parent directory. If that works though that would fix my issue trying to get the auth socket for ssh-agent into a container.

Mounting the socket or its directory won't work on the Mac, because a unix socket lives in kernel memory, and Docker and the Mac do not share a kernel. You'll get the socket file on the other side, but it's not "connected", because logically speaking those sockets are on two different machines.

Of course it won't work. That's the whole point of this issue ;-).

The point is that it works perfectly on Linux - and should also work on MacOS. Docker is not a VM. If that's how MacOS implemented it (as a VM), then that's broken - and it's not Docker at all. I would imagine that dozens of things would be broken if they implemented a VM for each Docker instance. I suppose they could implement a single VM for all the Docker instances.

@Alan-R Docker for Mac is implemented with a single VM for all Docker instances. That's necessary because Docker is meant to run on a Linux kernel, but macOS uses a Mach kernel.

Correct, that's how it is. One single VM. See https://docs.docker.com/docker-for-mac/docker-toolbox/#the-docker-for-mac-environment

Still a VM and therefore not the same Kernel as the Host.

So, it sounds like this isn't possible, and will remain in its current state (broken/unfixed) forever?

It's not the data transmission between the OSes that I care about, it's the socket calls to authenticate who the user is, and then to look in /proc to get the cgroup and then do a docker inspect. It sounds like all that stuff will never work - given that architecture. Docker processes show up on the host. I haven't looked, but I assume they don't on MacOS.

I'm curious why @dsheets stated it's on the roadmap if it's actually impossible to do.

I'm pretty sure it is possible. You can still forward sockets via netcat or socat. Somebody just has to look into it.

fbsb commented

Theoretically it's possible to share sockets between the mac host and the docker vm (hyperkit) as it works with the docker socket itself.

The difference between the docker socket and user mounted sockets is that the docker socket gets mounted via the vpnkit by hyperkit while volumes are ultimately shared to the vm by the osxfs (which has not been opensourced yet).

For this to work, osxfs has to detect whether a volume is a socket and connect it via a network socket inside the machine (just like socat works).

As a workaround one could run socat inside a container connecting a unix socket to a network port and then on the host another socat would connect the host unix socket to the container port. While this is very ugly and error prone it might work.
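The relay that socat (or a hypothetical osxfs bridge) would provide has a simple shape: accept on a unix socket and shuttle bytes to a TCP backend. A one-shot, single-connection sketch (throwaway paths and an OS-assigned port; not production code):

```python
import os
import socket
import tempfile
import threading

SOCK = os.path.join(tempfile.mkdtemp(), "bridge.sock")

# Stand-in for the service on the far side of the hypervisor:
# a TCP backend that upper-cases one request.
backend = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
backend.bind(("127.0.0.1", 0))
backend.listen(1)
tcp_port = backend.getsockname()[1]

def backend_serve():
    conn, _ = backend.accept()
    conn.sendall(conn.recv(1024).upper())
    conn.close()

# The bridge: accept one unix-socket client and relay a single
# request/response round trip over TCP (what socat automates).
front = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
front.bind(SOCK)
front.listen(1)

def bridge():
    client, _ = front.accept()
    with socket.create_connection(("127.0.0.1", tcp_port)) as tcp:
        tcp.sendall(client.recv(1024))   # unix -> tcp
        client.sendall(tcp.recv(1024))   # tcp -> unix
    client.close()

threads = [threading.Thread(target=backend_serve),
           threading.Thread(target=bridge)]
for t in threads:
    t.start()

# A client that only ever sees the unix socket.
c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
c.connect(SOCK)
c.sendall(b"ping")
reply = c.recv(64)
c.close()
for t in threads:
    t.join()
print(reply)  # b'PING'
```

For bidirectional streams (like an ssh-agent conversation), a real relay pumps both directions concurrently until EOF, which is exactly what the socat pair described above does.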

For reference:

https://docs.docker.com/docker-for-mac/osxfs/#file-types

Symlinks, hardlinks, socket files, named pipes, regular files, and directories are supported. Socket files and named pipes only transmit between containers and between macOS processes -- no transmission across the hypervisor is supported, yet. Character and block device files are not supported.

@udondan That would be insecure and would not support the system calls to identify the caller on the other end. It would eliminate the main advantage of UNIX domain sockets over named pipes.

On Linux, you can get the pid, uid, gid, and security context of your caller. On other BSD systems, you can get most of those things. On this implementation, those things aren't meaningful. So, you can no longer identify who is sending you messages.

On a Linux system, you can also get the cgroup from /proc and use it to figure out which container the caller was coming from, and then do a docker inspect on that container. You can't do that here.

So, full compatibility would be impossible without reimplementing Docker directly on the host OS. This is possible, but a huge amount of work. Containers don't care about the host OS - except for the system calls - which is pretty limited compared to the whole system. Docker, on the other hand, needs things like cgroups and much more OS capability.

The workaround I suppose is to create your sockets in other containers, and run whatever application you had in mind in a container - maybe one with full OS privileges and the Linux host /proc mounted directly on the container /proc (or something like that). An ugly hack that still might not work on MacOS.

Oh yes, sure, it does. Most popular solution probably is https://github.com/uber-common/docker-ssh-agent-forward

@udondan I agree you could get data forwarding working without a huge trouble (which I said earlier). But getting these other OS calls working and giving meaningful results would be impossible - because the process space and UID and GID space is separate between the host and the docker instances.

Unfortunately true. But it's the best we have at the moment. 😿

I would go further and say it's the best that there ever will be without a completely new implementation of Docker that doesn't involve a VM.

The best workaround is to just start a Linux VM on your Mac and do everything in your VM and forget MacOS. Alternatively, move all your host work into a Docker container, and figure out how to get access to all the Docker VM resources you need - which is kind of the half-way version of the first alternative.

tvon commented

Two things I think would be useful in this discussion:

  1. What the community should expect as far as an upstream solution. Roadmaps change, priorities change; that is the nature of the business. Is this still on a roadmap, or is it expected for an upcoming release, or has it been de-prioritized and unlikely to show up anytime soon in a release?
  2. Any technical information on the core issue. Is this a HyperKit limitation? A Hypervisor Framework limitation? Is a technical solution even possible on the OSS side of Docker for Mac? Is there a project that this is built upon that has discussion relevant to this issue?

So... I found something interesting in this for my use case (sharing docker.sock for local development, not really concerned about the security of the socket), but I have no idea what the implications are of this.

On the OS X host, /var/run/docker.sock is owned by root:daemon, and when you run a container with -v /var/run/docker.sock:/var/run/docker.sock, docker.sock is owned by root:root.

I was looking into trying something with socat to forward this into the container (which I don't fully understand, either), and just for grins I did the following

# Mount the host's docker socket into a docker-in-docker container:
docker run -v /var/run/docker.sock:/var/run/docker.sock -it jpetazzo/dind /bin/bash
# Inside the container, docker.sock is owned by root:root at this point.
chown root:daemon /var/run/docker.sock
# Create an unprivileged user in the daemon group (useradd, since the
# user does not exist yet in the image):
useradd -Um -s /bin/bash -G daemon tom
su tom
docker run hello-world

That seemed to work. Running docker in OS X still worked after exiting the container, and on subsequent docker runs the permission change persisted until I restarted docker for mac.

I don't really know why this works, or if I'm going to hose some part of my system, but it worked for me, I can now run docker on the host and in a container.

Docker for mac Version 18.03.1-ce-mac65 (24312)
OS X 10.13.4

So, are unix sockets eventually going to be supported by Docker (host: macOS, container: Linux)?

You can do some things (described above) which allow the two to inter-operate if all you need is data connectivity. But, if you need full Linux sockets APIs (like I described earlier) - that is unlikely to ever happen.

It seems that the most prevalent use case for this feature is for developers to develop and test containers locally on a single-user MacOS laptop. In this case, the connectivity is the main feature needed while the extra security & user identification provided natively on Linux are probably less useful. Most common examples being:

  • Mounting Docker socket for launching other containers & interacting with Docker daemon from a container
  • Mounting SSH Agent Socket for SSH connections from a container

Correct me if I'm wrong, but I've never heard of any company running a production environment with containers hosted on macOS Server. Therefore the security features are lower priority, given that developers will usually trust themselves, or at least take responsibility if they accidentally mess up the host system somehow. In this case, the "customer" for the feature (i.e. the developer) really just wants sockets mounted from the host to work just like on Linux. Some use cases that require the extra socket features may still not work, but at least basic connectivity would be satisfied.

I agree with @trinitronx about why people use this and why the security features don't matter to most people - unless, like me, they're testing and developing security-sensitive software which needs this feature.

Given that the Docker OS environment and the MacOS environment share nothing but the contents of regular files, and have no pid space in common, and even have different maximum values for process ids, sharing that kind of security information would require a very radical and costly rearchitecture of the MacOS docker implementation, or a switch of MacOS to become Linux-based. IMHO, neither seems very likely - exactly for the reasons @trinitronx stated. Although the former is somewhat more likely than the latter ;-).

lox commented

One of the primary reasons we have Docker for Mac is to test and develop Docker workflows that will run on Linux environments. Without parity around how sockets are handled, it's very difficult to work with any applications that mount in an ssh-agent, or the myriad of other use cases folks above have described. The alternative is developing custom alternate implementations for when the host is macOS, which kind of devalues some of the core benefits of docker and docker-compose based workflows.

Mounting the socket or its directory won't work on the Mac, because a unix socket lives in kernel memory, and Docker and the Mac do not share a kernel. You'll get the socket file on the other side, but it's not "connected", because logically speaking those sockets are on two different machines.

How is that fundamentally different from the stdin, stdout, and stderr pipes? I'd like to be able to use named pipes, which should in theory work almost the exact same as the process standard pipes in a hypervisor environment. Can someone explain to me why docker on MacOS supports stdout/stderr/stdin, but not named pipes? Is there some critical difference I'm missing?

kler commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30d of inactivity.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle stale

href commented

/remove-lifecycle stale

http://collabnix.com/how-docker-for-mac-works-under-the-hood/

Docker for Mac does not use docker-machine to provision its VM. The Docker Engine API is exposed on a socket available to the Mac host at /var/run/docker.sock. This is the default location Docker and Docker Compose clients use to connect to the Docker daemon, so you can use docker and docker-compose CLI commands on your Mac.

So.. if it can expose that socket, how come it can't handle SSH_AUTH_SOCK ?

@bryanhuntesl said:

So.. if it can expose that socket, how come it can't handle SSH_AUTH_SOCK ?

Because the docker socket is being exposed by a daemon running under macOS. The challenge with SSH_AUTH_SOCK is that it needs to be exposed inside of a running container, which is actually running inside of a VM. So, the socket would be caught by the VM's kernel, instead of the macOS kernel. It's like trying to mount a socket from one machine to another via NFS. The socket file exists, but its metadata doesn't make any sense to the kernel on the remote machine.

@wysenynja writes

It looks like docker 2 added "docker build --ssh AUTHSOCK"

I couldn't find this in any release notes. Can you share a link to this new feature?

I think this is it... I had been looking at docker/for-mac release notes.

docker/cli#1014

docker/cli#1014 is something else (though it may also be useful for the people in this thread).

I was talking about https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066

@bryanhuntesl said:

So.. if it can expose that socket, how come it can't handle SSH_AUTH_SOCK ?

Because the docker socket is being exposed by a daemon running under macOS. The challenge with SSH_AUTH_SOCK is that it needs to be exposed inside of a running container, which is actually running inside of a VM. So, the socket would be caught by the VM's kernel, instead of the macOS kernel. It's like trying to mount a socket from one machine to another via NFS. The socket file exists, but its metadata doesn't make any sense to the kernel on the remote machine.

Thanks for clarifying.

docker/cli#1014 is something else (though it may also be useful for the people in this thread).

Is it possible to connect to the docker4mac host VM using this (export DOCKER_HOST=ssh://localhost:????)? Then it should be possible to use ssh agent forwarding to share the macOS-based agent with a container?

I am very very very interested in a solution for this. I have been hardening my security practices for the last couple years and this is one of the last issue with my setup.

I cannot copy my ssh private key nor share it through a volume (I use gpg-agent in ssh-agent mode, and the corresponding gpg private key is safely stored on a write-only YubiKey), therefore most workarounds for sharing my ssh credentials with docker containers are inapplicable for me.

Since it is also impossible to map a USB device to a container (as far as my research indicates) I was forced to create a less secure ssh key to be used in containers which is not so nice and means I have abandoned docker for some use cases (managing my ansible environment for instance).

Is this the right issue for tracking cross-hypervisor FIFO support? Or is this issue only for Unix sockets and their special features, and plain text streams need their own issue?

It seems like plain streams have a relatively successful workaround of "just use a network connection instead". Is that going to be (or has it been) rolled in as a workaround in Docker for Mac itself, or should I spend time implementing it in my project?

Does anybody know if Hashicorp Nomad has found a way around this? I'm running a Nomad dev agent locally on a Mac that runs a Docker job, but I can't get the logs corresponding to this job through Nomad.

The error message I get is:
Error reading file: Unexpected response code: 500 (log entry for task "<taskname>" and log type "stdout" not found)