How to maintain contributed images
repeatedly opened this issue · 13 comments
We have contributed images and pending PRs for new destinations.
Currently, maintaining them is a problem:
- We, the repository authors, don't have knowledge of some destinations, e.g. logzio, stackdriver, kinesis, so we can't answer some issues and questions. In addition, we can't judge whether a patch for these destinations is correct or not.
- Merging patches and releasing new versions of contributed images are delayed because we need to wait for patches from contributors. This is not good for users.
- Some contributed images are no longer maintained. There are several reasons: the original author doesn't respond to issues, the account was changed or deleted, etc. Honestly, we want to purge such images from this repository...
We want to know how to avoid these problems. One idea is adding contributors to this repository, but that increases security concerns like backdoored images; we would need to add only trusted users.
Any feedback is welcome.
Hi @repeatedly, it seems like there are three categories of images:
- opensource projects, like elasticsearch, kafka or just fluentd (forwarder)
- destinations from big players and huge userbases (like stackdriver or cloudwatch)
- other destinations, like papertrail, logzio or (still waiting) logsense
I'd keep the first category.
Dropping 2 and 3 will make things easier to maintain for sure but will also create a barrier for newcomers.
The 2nd category is tricky because people are interested in using them but unfortunately companies may not be interested in maintaining all the projects out there (this repo included).
I'd keep them for the sake of making things easier for new users.
The least tricky is in my opinion the 3rd category - maybe there should be a policy that other destinations need to have at least two maintainers?
If nobody is then interested in resolving issues for 3rd-party proprietary destinations, I think it is fair to just drop them.
I do understand why one might think about dropping 2 and 3 altogether. In that case we (@logsenseapp) will just maintain our own forked repo.
> The least tricky is in my opinion the 3rd category - maybe there should be a policy that other destinations need to have at least two maintainers?
> If nobody is then interested in resolving issues for 3rd-party proprietary destinations, I think it is fair to just drop them.
Yeah, if active/trusted maintainers are available, we can support new images. Adding a new image is easy but removing one is hard, and we don't want to repeat the current problems with new images.
Actually, we started to use this repo for the 1st category.
Now, we would like to contribute an image that would use rdkafka instead of the ruby-kafka plugin, as the latter doesn't play well with Kerberos authentication to a Kafka cluster. The rdkafka image would need some extra packages for Kerberos and GSSAPI support. Is this still a good candidate for the 1st category?
We would like to have this image maintained by the community so people can contribute bugfixes, but I wouldn't want to impose on the maintainer of this repo.
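For reference, here is a rough sketch of what such a Dockerfile might look like; the base tag, package names and gems below are guesses and would need verification, not a tested recipe:

```dockerfile
# Hypothetical Dockerfile for an rdkafka-based Kafka image with Kerberos/GSSAPI support.
# Base tag, packages and gem list are placeholders.
FROM fluent/fluentd:v1.16-debian-1
USER root

# build-essential: the rdkafka gem compiles librdkafka from source
# krb5-user / libsasl2-*: Kerberos client tools and the SASL GSSAPI mechanism
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
      build-essential krb5-user libsasl2-dev libsasl2-modules-gssapi-mit \
 && gem install rdkafka fluent-plugin-kafka --no-document \
 && rm -rf /var/lib/apt/lists/*

USER fluent
```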
@repeatedly As someone who just submitted a new destination (category 2), I think having a single repo that is well maintained is worth a lot. If everyone creates their own fork just to add work in category 2 or 3, you end up with a lot of friction over where images are hosted, where to file issues if they pertain to the new destination as well as the upstream, etc.
In my case, I contributed #411 and will also maintain it as it's something we're actively using and need to keep up-to-date. But we won't be able to fork this repo (due to security restrictions on the GH org) and don't have a publicly facing registry for other users to consume this contribution. Hence I'm asking: what are the requirements for "new/trusted maintainers"?
> what are the requirements for "new/trusted maintainers"?
Hmm... I think there are 2 approaches:
- Add 2 or more people as maintainers
- Add an organization-backed person as a maintainer
We want to avoid the no-maintainer case.
@repeatedly Understood. In my case, it's a company-backed contribution. What do you suggest as a "proof" for the 2 approaches?
I agree with @elsesiy; currently AWS points to this project, as seen here and here. Dropping 2 & 3 seems quite disruptive.
I'd really like an image that supports CloudWatch & S3 in one deployment, which fluentd is fully capable of. This would help reduce cost: noisy debug logs go only to S3, while more pertinent logs go to CloudWatch.
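For example, once both output plugins are in the image, the routing itself is plain fluentd configuration; a minimal sketch with made-up tags, bucket and log group names:

```
# Noisy debug logs go to S3 only (cheap storage)
<match app.debug.**>
  @type s3
  s3_bucket example-debug-logs
  s3_region us-east-1
  path debug/
  <buffer time>
    timekey 3600
    timekey_wait 10m
  </buffer>
</match>

# Everything else goes to CloudWatch Logs
<match app.**>
  @type cloudwatch_logs
  region us-east-1
  log_group_name example-app
  log_stream_name example-stream
  auto_create_stream true
</match>
```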
@repeatedly Can you please get back to me on what kind of proof is needed? You've seen me contributing to this repo frequently already but I really need to finish up #411 since it's causing unnecessary overhead to maintain this on our end, thanks!
@repeatedly same for me. I really miss the functionality from #411 and it would cause a lot of overhead for the company I work for.
@repeatedly I've updated the PR but still like to run a small test before we merge it. I'm running a modified version of the underlying plugin internally so I just want to confirm again that it works the way it is now, thanks
Perhaps don't use in-process output plugins? Instead, each output plugin could be a separate process that runs as a sidecar. This could be patterned after the fluentd-forward plugin.
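A minimal sketch of that split, assuming the sidecar listens on a non-default port so it doesn't clash with the main instance's own forward input (tags and ports are made up):

```
# Main fluentd (daemonset image): no destination-specific plugins, just relay
<match backend.cloudwatch.**>
  @type forward
  <server>
    host 127.0.0.1
    port 24225
  </server>
</match>
```

```
# Sidecar fluentd: ships with the destination plugin, e.g. cloudwatch_logs
<source>
  @type forward
  bind 127.0.0.1
  port 24225
</source>

<match backend.cloudwatch.**>
  @type cloudwatch_logs
  region us-east-1
  log_group_name example-app
  log_stream_name example-stream
  auto_create_stream true
</match>
```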
Similar to Adam, my initial thought when I found this repo was: can I add plugins to the basic fluentd docker image?
I do like the community-maintained k8s configs, but I think it would be easier to maintain all of this if there were a simple-ish way to tell the vanilla fluentd image to "look in this folder" or "install this set" of plugins (or use a sidecar). That way the community would only have to maintain the daemonset (and give some example configurations).
This might not work for some subset of users but it would basically remove the need for such diverse maintenance.
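As a rough sketch of the "install this set" idea, a user-side Dockerfile on top of the vanilla image could look like this (base tag and plugin names are just examples):

```dockerfile
# Hypothetical user-side Dockerfile: vanilla fluentd image plus a chosen plugin set,
# so only the daemonset manifests and example configs need community maintenance.
FROM fluent/fluentd:v1.16-debian-1
USER root
# Plugins with native extensions would also need build tools, as in the rdkafka example above.
RUN gem install \
      fluent-plugin-kubernetes_metadata_filter \
      fluent-plugin-logzio \
      --no-document
USER fluent
```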