WebAssembly/wasi-sockets

Emulate dualstack sockets in userspace

badeend opened this issue · 6 comments

Dualstack support per platform according to the internet: (Not verified)

  • OpenBSD: Not supported.
  • FreeBSD: Not supported by default, but support can be manually toggled with a system-wide config change.
  • Windows: Supported. Disabled by default. Can be enabled on a per-socket basis.
  • Linux: Supported. Generally enabled by default, but some distros disable it by default.
  • POSIX & RFC3493: Specifies IPv6 sockets should be dualstack by default.

On the *BSDs: Can dualstack sockets be emulated efficiently and transparently in userspace by creating two sockets?
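
Illustrative only, not part of the proposal: a minimal sketch of the two-socket approach on a plain POSIX host, with one IPv6-only listener and one IPv4 listener bound to the same port and multiplexed with poll(). The helper name accept_dualstack is made up for this sketch, and error cleanup is omitted.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <poll.h>
#include <stdint.h>
#include <sys/socket.h>

/* Accept one connection on `port`, reachable over both IPv4 and IPv6,
 * using two listening sockets instead of a dual-stack socket.
 * Returns the accepted fd, or -1 on error (cleanup omitted for brevity). */
static int accept_dualstack(uint16_t port)
{
    int v6 = socket(AF_INET6, SOCK_STREAM, 0);
    int v4 = socket(AF_INET, SOCK_STREAM, 0);
    if (v6 < 0 || v4 < 0)
        return -1;

    /* Force the IPv6 socket into v6-only mode so the IPv4 socket can
     * bind the same port without EADDRINUSE. */
    int one = 1;
    if (setsockopt(v6, IPPROTO_IPV6, IPV6_V6ONLY, &one, sizeof(one)) < 0)
        return -1;

    struct sockaddr_in6 a6 = { .sin6_family = AF_INET6,
                               .sin6_addr   = in6addr_any,
                               .sin6_port   = htons(port) };
    struct sockaddr_in  a4 = { .sin_family  = AF_INET,
                               .sin_addr.s_addr = htonl(INADDR_ANY),
                               .sin_port    = htons(port) };
    if (bind(v6, (struct sockaddr *)&a6, sizeof(a6)) < 0 ||
        bind(v4, (struct sockaddr *)&a4, sizeof(a4)) < 0 ||
        listen(v6, SOMAXCONN) < 0 || listen(v4, SOMAXCONN) < 0)
        return -1;

    /* Wait until either listener has a pending connection. */
    struct pollfd fds[2] = { { .fd = v6, .events = POLLIN },
                             { .fd = v4, .events = POLLIN } };
    if (poll(fds, 2, -1) < 0)
        return -1;
    return accept((fds[0].revents & POLLIN) ? v6 : v4, NULL, NULL);
}
```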

@badeend in your assessment, do you think this needs to be part of an MVP? Given that not all OSs support this, maybe we can punt on it and let the emulation happen entirely in userspace, even above libc?

I suggest saying it's "nondeterministic" whether dual-stack is enabled by default or not. This effectively represents the reality most portable socket code today faces, where dual-stack support may or may not be enabled in ways that are outside the application's control or visibility.

Beyond that, it'd be nice (but not necessarily critical for an MVP) to have a way to set IPV6_V6ONLY.
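
For reference, at the POSIX layer that option is set with setsockopt; a minimal sketch (the helper name is made up for illustration):

```c
#include <netinet/in.h>
#include <sys/socket.h>

/* Opt an IPv6 socket out of (enabled = 1) or into (enabled = 0)
 * dual-stack mode. On most platforms this must happen before bind(). */
static int set_v6only(int sock, int enabled)
{
    return setsockopt(sock, IPPROTO_IPV6, IPV6_V6ONLY,
                      &enabled, sizeof(enabled));
}
```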

@tschneidereit You're correct in that it doesn't need to be part of the MVP. It currently isn't.

@sunfishcode That's certainly the most pragmatic option. Let me start by saying that I too am on the fence on this one.

The things that made me swing slightly towards IPv6-only by default versus just choosing the underlying platform's default:

  • In the latter case you'd also need a way to query IPV6_V6ONLY on a socket (see the sketch after this list).
  • It would open the WASI API up to the idiosyncrasies of the underlying platforms. Unlike, for example, case sensitivity of the file system, this can be abstracted away.
  • Ideologically, dualstack sockets are just a compatibility hack. I think it's reasonable to say hacks should be opt-in rather than opt-out, when possible. As mentioned in the explainer, this opt-in feature can be added post-MVP, or sooner if you'd prefer.
  • Either way, the interface keeps its promise. You ask for an IPv6 socket; you get an IPv6 socket.
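
For reference, the query mentioned in the first point above maps to getsockopt with IPV6_V6ONLY at the POSIX layer; a minimal sketch (the helper name is illustrative):

```c
#include <netinet/in.h>
#include <sys/socket.h>

/* Report whether an IPv6 socket is currently v6-only.
 * Returns 1 or 0, or -1 on error. */
static int get_v6only(int sock)
{
    int value = 0;
    socklen_t len = sizeof(value);
    if (getsockopt(sock, IPPROTO_IPV6, IPV6_V6ONLY, &value, &len) < 0)
        return -1;
    return value != 0;
}
```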

That also sounds reasonable to me.

In #13 I've added the ability to get and set the ipv6-only option. In the documentation of that method I've clarified that dual-stack support is not required from the host.

The underlying commercial operating systems default to "prefer IPv6, fail over to IPv4". The current best practice among many security teams is "prefer IPv4, fail over to IPv6". So how does this impact existing software?