security vulnerability for csi-provisioner:v3.5.0
Jainbrt opened this issue · 5 comments
What happened:
registry.k8s.io/sig-storage/csi-provisioner:v3.5.0
| CVE | SEV | CVSS | PACKAGE | VERSION | TYPE | STATUS | PATH |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CVE-2023-29404 | C | 9.8 | go | 1.20.3 | app | fixed in 1.20.5, 1.19.10 | /csi-provisioner |
| CVE-2023-24539 | H | 7.3 | go | 1.20.3 | app | fixed in 1.20.4, 1.19.9 | /csi-provisioner |
| CVE-2023-24540 | C | 9.8 | go | 1.20.3 | app | fixed in 1.20.4, 1.19.9 | /csi-provisioner |
| CVE-2023-39533 | H | 7.5 | go | 1.20.3 | app | fixed in 1.20.7, 1.19.12 | /csi-provisioner |
| CVE-2023-29402 | C | 9.8 | go | 1.20.3 | app | fixed in 1.20.5, 1.19.10 | /csi-provisioner |
| CVE-2023-29403 | H | 7.8 | go | 1.20.3 | app | fixed in 1.20.5, 1.19.10 | /csi-provisioner |
| CVE-2023-29405 | C | 9.8 | go | 1.20.3 | app | fixed in 1.20.5, 1.19.10 | /csi-provisioner |
| CVE-2023-29400 | H | 7.3 | go | 1.20.3 | app | fixed in 1.20.4, 1.19.9 | /csi-provisioner |
What you expected to happen:
No Critical or High CVEs.
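Every CVE in the table is in the Go standard library rather than in csi-provisioner's own code, so remediation means rebuilding the image with a patched Go release (1.20.7 or later covers all of the entries above). A rough sketch of a local rebuild; the `make build` target is an assumption based on the kubernetes-csi release-tools conventions:

```sh
# Rebuild the provisioner binary with a patched Go toolchain.
git clone https://github.com/kubernetes-csi/external-provisioner.git
cd external-provisioner
go version                        # confirm the local toolchain is >= go1.20.7
make build                        # assumed release-tools target
go version bin/csi-provisioner    # verify the toolchain version embedded in the binary
```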
How to reproduce it:
Twistlock scan
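The report above came from Twistlock, but the same Go stdlib findings should be reproducible with an open-source scanner; for example, either of these (assuming Trivy or Grype is installed):

```sh
trivy image registry.k8s.io/sig-storage/csi-provisioner:v3.5.0
# or
grype registry.k8s.io/sig-storage/csi-provisioner:v3.5.0
```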
Anything else we need to know?:
Environment:
- Driver version:
- Kubernetes version (use `kubectl version`):
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others:
We are also using Twistlock and Anchore as scan tools. Against sig-storage/csi-provisioner:v3.6.1 (tested 2023-10-19) we have multiple findings for Go 1.20.5: one Critical and a few Mediums, all resolved by Go 1.20.9 (1.20.10 is the latest 1.20.x release).
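A quick way to confirm which Go toolchain a released image was actually built with, without a scanner, is to copy the binary out of the image and read the version Go embeds in it. The in-image path `/csi-provisioner` is taken from the scan output above; Docker availability is assumed:

```sh
docker create --name csi-tmp registry.k8s.io/sig-storage/csi-provisioner:v3.6.1
docker cp csi-tmp:/csi-provisioner ./csi-provisioner
docker rm csi-tmp
go version ./csi-provisioner    # prints e.g. "go1.20.5"
```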
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.