kappa-db/multifeed

Community Question: Should multifeed use decentstack under the hood?

telamon opened this issue · 1 comment

Fellow multifeeders, I'm sorry to bring up this awkward question, but I'd like to know
whether you are for or against merging the experimental branch into master.
Please comment!

The reason I'm asking is that a couple of months ago I forked multifeed's replication manager into a stand-alone project.
I believe my motivations were rational: I found myself in a situation where
I had a multifeed and a corestore that needed to replicate over a single peer connection.

That led me to the idea of an external replication manager that decouples 'storage' from 'replication'.
The experiment was successful, and I found that decentstack can serve a purpose on its own, but I would like to continue working with the community.
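
To make the idea concrete, here is a minimal sketch of what such an external replication manager could look like. All names here (`ReplicationManager`, `use()`, `cores()`, `core.replicate(stream)`) are illustrative, not decentstack's actual API; the only assumption is that each storage component can enumerate its cores and attach them to an existing replication stream.

```js
// Hypothetical sketch: a replication manager owns the single peer
// connection, while the storage components (a multifeed, a corestore, ...)
// only register the cores they want replicated. Names are illustrative.
class ReplicationManager {
  constructor () {
    this.sources = []
  }

  // A "source" is anything exposing cores() -> iterable of replicatable cores.
  use (source) {
    this.sources.push(source)
    return this
  }

  // Attach every core from every registered source to one shared stream.
  // `stream` stands in for a multiplexed protocol connection, and
  // `core.replicate(stream)` is an assumed interface, not a real call.
  replicate (stream) {
    for (const source of this.sources) {
      for (const core of source.cores()) {
        core.replicate(stream)
      }
    }
    return stream
  }
}

// Usage with stubbed-out sources, just to show the shape:
const manager = new ReplicationManager()
manager.use({ cores: () => [/* feeds owned by a multifeed */] })
manager.use({ cores: () => [/* cores owned by a corestore */] })
// manager.replicate(peerConnection) // one connection, many cores
```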

So what I'm trying to say is:

As a developer I don't want to maintain two similar replication managers,
so in order to easily backport new features and unify our
infrastructure, I want to use decentstack under multifeed's hood.

Comparison table (what would change if merged):

| Feature | Multifeed | Decentstack |
| --- | --- | --- |
| Storage | built-in | pluggable choice |
| Exchange protocol | v2 | v3 |
| Hypercore-protocol | v6 | v7 |
| Hypercore support | v7 | v8 |
| Replication control | - | middleware API |
| License | ISC | AGPLv3 |
| Internal code style | ES5 | ES6 |

The result of the merge will be that multifeed still uses its own
storage code, but feed exchange and replication are off-loaded to the internal
decentstack instance.

For those of you with multifeed in your dependency lists, the change should
only require upgrading your multifeed and hypercore (v8) dependencies,
and then you should have access to the new stack functionality as well.

I think this is an exciting change that will let us discover new patterns, especially
in the replication control area.

Having said that, I will in no way be offended or discouraged if any of you
disagree, but I want to at least let the elephant out of the room :)

I've already spoken to @telamon about this on IRC, but I'll try to summarize my feelings here as well:

Multifeed was designed to do two things: manage a set of hypercores, and provide a replication scheme that lets two peers exchange data from their hypercores with each other. I think it'd be useful to have a mechanism for choosing which subsets of cores to share and receive, and I think this can be done with minimal API surface area changes, and without having multifeed depend on a larger framework for it.
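
To sketch the kind of minimal API surface change I'm picturing (purely hypothetical; neither option exists in multifeed today), a pair of filter hooks supplied at replication time could decide which local cores get offered and which remote keys get accepted:

```js
// Hypothetical sketch of a share/accept filter applied at replication time.
// It only shows the shape of a "choose which subsets of cores to exchange"
// mechanism, not an actual multifeed API.
function selectCores (allCores, opts = {}) {
  const share = opts.share || (() => true)    // which local cores to offer
  const accept = opts.accept || (() => true)  // which remote keys to fetch
  return {
    offered: allCores.filter(core => share(core)),
    wanted: remoteKeys => remoteKeys.filter(key => accept(key))
  }
}

// Example: only share cores tagged as public, accept everything.
const selection = selectCores(
  [{ key: 'a', public: true }, { key: 'b', public: false }],
  { share: core => core.public }
)
console.log(selection.offered.map(c => c.key)) // -> [ 'a' ]
```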

One idea that's being floated is to change multifeed's storage to use a common multi-hypercore storage api (e.g. https://github.com/noffle/corestorage) and have multifeed just become a replication manager -- one of many that folx in the dat ecosystem could use. This would reduce multifeed's surface area considerably, and make it pretty simple to swap it out for decentstack or a pubsub approach, for example. There are also experiments being done on kappa-core that are related to this. I'm not totally sure where things are going just yet, but I feel like I want to see how some of these experiments shake out before making significant changes to multifeed.
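
For concreteness, a common multi-hypercore storage api in that spirit might not need to expose much more than the following. This is a hypothetical sketch, not the actual corestorage API; the point is that any replication manager (multifeed, decentstack, a pubsub approach, ...) would depend only on this small surface rather than on how cores are stored.

```js
// Hypothetical sketch of a shared multi-hypercore storage interface.
// Cores are stubbed with plain objects so the example runs on its own;
// method names are illustrative.
class CoreStorage {
  constructor () {
    this.coresByKey = new Map()
  }

  // Create or load a core for a given key.
  get (key) {
    if (!this.coresByKey.has(key)) {
      this.coresByKey.set(key, { key, length: 0 })
    }
    return this.coresByKey.get(key)
  }

  // Enumerate everything stored locally, so a replication manager can
  // decide what to offer a peer.
  list () {
    return [...this.coresByKey.values()]
  }
}

const storage = new CoreStorage()
storage.get('deadbeef')
console.log(storage.list().map(c => c.key)) // -> [ 'deadbeef' ]
```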

Thanks @telamon for all of your patience & efforts around this 💙