openbmc/libmctp

Difference between demux daemon implementation and kernel implementation


Hello. Firstly, this is a question, not an issue, and secondly, I am very new to this.

What is the difference between the demux daemon implementation and the MCTP kernel implementation?

I understand the demux daemon is used to de-multiplex between various application protocols (for example, PLDM) and uses AF_UNIX sockets, whereas the kernel implementation uses AF_MCTP kernel sockets. So what exactly is the difference between these two, implementation-wise?

Also, does one have a performance advantage over the other?

Hi,

What is the difference between the demux daemon implementation and the MCTP kernel implementation?

Largely, it comes down to the trade-offs involved:

  1. The advantage of a userspace implementation is that it's easier to write and experiment with, and from a defensive perspective is safer than an in-kernel implementation (bugs kill or compromise the process rather than the kernel).
  2. The disadvantage of a userspace implementation is that we must poke UAPIs out of the kernel so that it's possible to implement the transport bindings in userspace.

The disadvantage in 2 is significant. The DMTF specify many MCTP transport bindings, and all of them would require some low-level userspace interface to implement correctly. It happens to be the case that we have UAPIs for the serial and astlpc transports implemented in libmctp and exposed via the mctp-demux-daemon, but others such as I2C, I3C and USB are much more problematic. Further, it's less of a technical problem than a social one: justifying the changes needed to add capable UAPIs for these buses would run into some serious opposition upstream.
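For a concrete sense of what the userspace path looks like today, below is a rough sketch of a demux client: connect to the daemon's abstract AF_UNIX socket, register a message type, then exchange messages framed with a one-byte EID. The socket name, the SOCK_SEQPACKET type and the framing are assumptions to verify against mctp-demux-daemon's source rather than a documented interface, and the EID (8) and message type (1, PLDM) are placeholders.

```c
/*
 * Rough sketch of an mctp-demux-daemon client. The abstract socket name
 * ("\0mctp-mux"), SOCK_SEQPACKET type, single-byte type registration and
 * one-byte EID framing are assumptions to verify against the daemon's
 * source; the EID (8) and message type (1, PLDM) are placeholders.
 */
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
	static const char sockname[] = "\0mctp-mux"; /* abstract namespace */
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	uint8_t type = 1;                    /* MCTP message type: PLDM */
	uint8_t tx[] = { 8, 1, 0x80, 0x01 }; /* [dest EID][payload...] */
	uint8_t rx[1024];                    /* [src EID][payload...] */
	int sd;

	sd = socket(AF_UNIX, SOCK_SEQPACKET, 0);
	if (sd < 0)
		return 1;

	memcpy(addr.sun_path, sockname, sizeof(sockname) - 1);
	if (connect(sd, (struct sockaddr *)&addr,
		    sizeof(addr.sun_family) + sizeof(sockname) - 1) < 0)
		return 1;

	/* register the message type we want to send and receive */
	if (write(sd, &type, sizeof(type)) != sizeof(type))
		return 1;

	/* transmit: destination EID prepended to the MCTP message body */
	if (write(sd, tx, sizeof(tx)) != sizeof(tx))
		return 1;

	/* receive: source EID prepended to the MCTP message body */
	if (read(sd, rx, sizeof(rx)) < 0)
		return 1;

	close(sd);
	return 0;
}
```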

Really, the problem should be solved in the kernel. Further, by doing so, we get a really tidy, fully-featured socket-based interface, and that's pretty satisfying.
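As a rough illustration of that interface, sending a message over an AF_MCTP socket looks something like the sketch below, modelled on the example in the kernel's MCTP documentation. The destination EID (8) and message type (1, PLDM) are placeholder values, and it assumes kernel headers and a libc recent enough to provide AF_MCTP and struct sockaddr_mctp.

```c
/*
 * Minimal sketch of the kernel AF_MCTP socket interface, modelled on the
 * example in the kernel's MCTP documentation. EID 8 and message type 1
 * (PLDM) are placeholder values.
 */
#include <stdint.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/mctp.h>

int main(void)
{
	struct sockaddr_mctp addr = { 0 };
	uint8_t msg[] = { 0x80, 0x01 }; /* illustrative request bytes */
	int sd;

	sd = socket(AF_MCTP, SOCK_DGRAM, 0);
	if (sd < 0)
		return 1;

	addr.smctp_family = AF_MCTP;
	addr.smctp_network = MCTP_NET_ANY;  /* any local MCTP network */
	addr.smctp_addr.s_addr = 8;         /* destination endpoint ID */
	addr.smctp_type = 1;                /* MCTP message type (PLDM) */
	addr.smctp_tag = MCTP_TAG_OWNER;    /* kernel allocates the tag */

	/* routing, tag tracking and packetisation happen in the kernel */
	if (sendto(sd, msg, sizeof(msg), 0,
		   (struct sockaddr *)&addr, sizeof(addr)) < 0)
		return 1;

	close(sd);
	return 0;
}
```

Note there's no per-transport framing in sight: the kernel handles routing, tag allocation and packetisation/reassembly behind that single sendto().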

More information on the kernel-based solution can be found here.

Also, does one have a performance advantage over the other?

The in-kernel solution involves less IPC, so you could make an argument that, architecturally, it has a performance advantage. Answering whether that matters for your use-cases will require profiling.

Firstly, this is a question, not an issue, and secondly, I am very new to this.

No worries. Broadly, we try to keep issues to bugs and feature requests. Questions are best directed to the OpenBMC mailing list. You can sign up here, and we have an archive on lore.kernel.org.