Benchmark plugin reports 0 skipped but summary shows 1 skipped
sgandon opened this issue · 4 comments
I ran the benchmark tool (kubectl-mtb) against one tenant namespace of our cluster, and the summary line of the output says
13 Passed | 3 Failed | 0 Skipped | 1 Errors |
but the results table below shows 1 test as Skipped:
| 18 | MTB-PL1-CC-TI-2 | Block access to other tenant resources | Skipped |
Here is the full tail of the output:
Completed in 889.772985ms
----------------------------------
13 Passed | 3 Failed | 0 Skipped | 1 Errors |
Completed in 20.665293895s
===================================
+-----+------------------+----------------------------------------+---------+
| NO | ID | TEST | RESULT |
+-----+------------------+----------------------------------------+---------+
| 1 | MTB-PL1-CC-CPI-1 | Block access to cluster resources | Passed |
| 2 | MTB-PL1-BC-CPI-2 | Block Multitenant Resources | Error |
| 3 | MTB-PL1-BC-CPI-3 | Block add capabilities | Passed |
| 4 | MTB-PL1-BC-CPI-4 | Require run as non-root user | Passed |
| 5 | MTB-PL1-BC-CPI-5 | Block privileged containers | Passed |
| 6 | MTB-PL1-BC-CPI-6 | Block privilege escalation | Passed |
| 7 | MTB-PL1-CC-DI-1 | Require always imagePullPolicy | Passed |
| 8 | MTB-PL1-CC-FNS-1 | Configure namespace resource quotas | Failed |
| 9 | MTB-PL1-CC-FNS-2 | Configure namespace object limits | Failed |
| 10 | MTB-PL1-BC-HI-1 | Block use of host path volumes | Passed |
| 11 | MTB-PL1-BC-HI-1 | Block use of NodePort services | Failed |
| 12 | MTB-PL1-BC-HI-3 | Block use of host networking and ports | Passed |
| 13 | MTB-PL1-BC-HI-4 | Block use of host PID | Passed |
| 14 | MTB-PL1-BC-HI-5 | Block use of host IPC | Passed |
| 15 | MTB-PL1-CC-TI-1 | Block modification of resource quotas | Passed |
| 16 | MTB-PL2-BC-OPS-3 | Create Role Bindings | Passed |
| 17 | MTB-PL2-BC-OPS-4 | Create Network Policies | Passed |
| 18 | MTB-PL1-CC-TI-2 | Block access to other tenant resources | Skipped |
+-----+------------------+----------------------------------------+---------+
So there is an issue in the reporting: the summary line accounts for only 17 results (13 + 3 + 0 + 1) while the table lists 18 tests, one of them skipped. Moreover, the last test (MTB-PL1-CC-TI-2) is the one I am most interested in, and I don't know why it was skipped.
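My guess (I have not traced the code) is that the summary tally simply never increments a counter for the Skipped outcome, while the detailed table is rendered from the same result list and therefore still shows the row. A minimal Go sketch of that failure mode, using a hypothetical counter struct and tally function rather than kubectl-mtb's actual types:

```go
package main

import "fmt"

// Hypothetical counter struct; kubectl-mtb's real reporter types may differ.
type counts struct {
	passed, failed, skipped, errored int
}

// tally reproduces the suspected bug: Passed, Failed and Error are counted,
// but there is no Skipped case, so a skipped test is counted nowhere even
// though it still appears in the detailed table built from the same results.
func tally(results []string) counts {
	var c counts
	for _, r := range results {
		switch r {
		case "Passed":
			c.passed++
		case "Failed":
			c.failed++
		case "Error":
			c.errored++
			// missing: case "Skipped": c.skipped++
		}
	}
	return c
}

func main() {
	// The 18 outcomes from the table above: 13 passed, 3 failed, 1 error, 1 skipped.
	results := make([]string, 0, 18)
	for i := 0; i < 13; i++ {
		results = append(results, "Passed")
	}
	results = append(results, "Failed", "Failed", "Failed", "Error", "Skipped")

	c := tally(results)
	fmt.Printf("%d Passed | %d Failed | %d Skipped | %d Errors |\n",
		c.passed, c.failed, c.skipped, c.errored)
	// Prints: 13 Passed | 3 Failed | 0 Skipped | 1 Errors |
	// i.e. only 17 of the 18 results are accounted for in the summary.
}
```

If that is roughly what the reporter does, adding a Skipped case (and checking that the four counters sum to the number of rows in the table) would keep the two views consistent.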
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue in response to the /close comment above.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.