Kubernetes vulnerability dashboard
tallclair opened this issue · 21 comments
Should we have a central place where Kubernetes security vulnerabilities are published? Our current announcement template includes a number of mailing lists and discuss.kubernetes.io. We also usually file a GitHub issue with more details on the vulnerability.
Some ideas include:
- (current process) just use the kubernetes-security-announce history
- Create a security bulletin on the website (something similar to the GKE security bulletins or AWS security bulletins)
- Use GitHub:
  - File GitHub issues with a dedicated label
  - File GitHub issues in a dedicated repo
  - Leverage GitHub security advisories?
/help
@tallclair:
This request has been marked as needing help from a contributor. Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
A few notes on GitHub security advisories:
- They require repo admin privileges to create
- I think advisories cannot be edited or amended once published
- There is an Atom feed (and API) associated with a repo's security advisories, so they could suffice as a dashboard & feed
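For illustration, here is a minimal Go sketch of what consuming per-repo advisory data could look like. The endpoint path and JSON field names are my assumptions about GitHub's repository security advisories REST API, not something I've verified, and a token with repo access may be required:

```go
// Hypothetical sketch: list a repository's published security advisories via
// GitHub's REST API. The endpoint path and JSON field names are assumptions
// and may need adjusting against the actual API.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type advisory struct {
	GHSAID      string `json:"ghsa_id"`
	CVEID       string `json:"cve_id"`
	Summary     string `json:"summary"`
	Severity    string `json:"severity"`
	PublishedAt string `json:"published_at"`
}

func main() {
	// Assumed endpoint; authentication is omitted for brevity.
	url := "https://api.github.com/repos/kubernetes/kubernetes/security-advisories"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var advisories []advisory
	if err := json.NewDecoder(resp.Body).Decode(&advisories); err != nil {
		panic(err)
	}
	for _, a := range advisories {
		fmt.Printf("%s (%s, %s): %s\n", a.CVEID, a.GHSAID, a.Severity, a.Summary)
	}
}
```

If the feed or API differs from this guess, only the URL and struct tags should need to change; the point is that a repo's advisories are already queryable enough to drive a dashboard.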
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
/cc @justaugustus
An approach to a vulnerability dashboard that we discussed: add a new kind/cve label, and ensure that CVE patches get a release note along the lines of kubernetes/kubernetes#92941:

> CVE-2020-8559 (Medium): Privilege escalation from compromised node to cluster. See kubernetes/kubernetes#92914 for more details.

Then, leverage https://relnotes.k8s.io/ filtered by kind/cve as the vulnerability dashboard.
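To make that concrete, here is a rough Go sketch that filters a release-notes JSON dump for entries tagged with the proposed kind/cve label. The file name and the text/kinds field names are assumptions about the data behind relnotes.k8s.io, not the confirmed release-notes schema:

```go
// Hypothetical sketch: build a CVE "dashboard" by filtering release-notes
// JSON for entries carrying the proposed kind/cve label. The input path and
// the "text"/"kinds" fields are assumed, not a confirmed format.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type releaseNote struct {
	Text  string   `json:"text"`
	Kinds []string `json:"kinds"`
}

func main() {
	data, err := os.ReadFile("release-notes-1.19.json") // assumed local dump
	if err != nil {
		panic(err)
	}
	var notes []releaseNote
	if err := json.Unmarshal(data, &notes); err != nil {
		panic(err)
	}
	for _, n := range notes {
		for _, k := range n.Kinds {
			if k == "cve" {
				fmt.Println(n.Text) // candidate entry for the dashboard
				break
			}
		}
	}
}
```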
I'd like to either:
- hyperlink to that content from https://k8s.io/, or
- transclude the same content into a page on https://k8s.io/
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-contributor-experience at kubernetes/community.
/close
@fejta-bot: Closing this issue.
/remove-lifecycle stale
/reopen
@sbs2001: Reopened this issue.
An alternative approach would be to do something like the Istio project does. E.g., they have markdown documents with embedded YAML like this, which are then rendered by a frontend like this.
IMHO the advantage of this approach is that users can see whether they are affected and how to fix the issue all on one page.
FWIW I've been converting the security announcements to YAML format as part of ismyk8ssecure; see the advisories in particular. Each advisory contains the CVE and a list of versions of the particular Kubernetes component that are vulnerable to it. Adding the fixes to these documents would be trivial, and building a bulletin from them would be fairly easy.
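As a rough illustration of how such documents could be consumed, here is a small Go sketch that unmarshals an advisory and checks whether a given component version is affected. The YAML field names (cve, component, affected_versions, fixed_in) are made up for the example and are not the actual ismyk8ssecure schema:

```go
// Hypothetical sketch: parse a machine-readable advisory and check whether a
// running component version is affected. The YAML schema below is illustrative,
// not the real ismyk8ssecure format.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type advisory struct {
	CVE              string   `yaml:"cve"`
	Component        string   `yaml:"component"`
	AffectedVersions []string `yaml:"affected_versions"`
	FixedIn          []string `yaml:"fixed_in"`
}

const doc = `
cve: CVE-2020-8559
component: kube-apiserver
affected_versions: ["1.18.0", "1.18.1", "1.18.2", "1.18.3", "1.18.4", "1.18.5"]
fixed_in: ["1.18.6"]
`

func main() {
	var adv advisory
	if err := yaml.Unmarshal([]byte(doc), &adv); err != nil {
		panic(err)
	}

	running := "1.18.3" // version of the component in the user's cluster
	for _, v := range adv.AffectedVersions {
		if v == running {
			fmt.Printf("%s %s is affected by %s; upgrade to one of %v\n",
				adv.Component, running, adv.CVE, adv.FixedIn)
			return
		}
	}
	fmt.Printf("%s %s is not affected by %s\n", adv.Component, running, adv.CVE)
}
```

This is the "all on one page" property in code form: the same document that feeds a bulletin page can also answer "am I affected, and what do I upgrade to".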
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.