Issues generating an SBOM for a container tagged for AWS ECR on Mac M1
strongjz opened this issue · 4 comments
What happened:
Using bom to generate an SBOM for a container image stored in AWS ECR, the command panicked with a runtime index-out-of-range error (full log below).
What you expected to happen:
bom outputs an SBOM for the image.
How to reproduce it (as minimally and precisely as possible):
On an arm64 Mac, run bom generate against an image hosted in ECR, e.g.:
bom generate -i 123456789012.dkr.ecr.us-east-1.amazonaws.com/cluster-registry-client:ed22c79
Anything else we need to know?:
n/a
Environment:
bom version
______ _____ ___ ___
| ___ \| _ || \/ |
| |_/ /| | | || . . |
| ___ \| | | || |\/| |
| |_/ /\ \_/ /| | | |
\____/ \___/ \_| |_/
bom: A tool for working with SPDX manifests
GitVersion: v0.3.0
GitCommit: unknown
GitTreeState: unknown
BuildDate: unknown
GoVersion: go1.19.1
Compiler: gc
Platform: darwin/arm64
uname -a
Darwin Jamess-MBP-2.localdomain 21.6.0 Darwin Kernel Version 21.6.0: Mon Aug 22 20:19:52 PDT 2022; root:xnu-8020.140.49~2/RELEASE_ARM64_T6000 arm64
Log output:
Jamess-MBP-2:adobe-images strongjz$ bom generate -i 123456789012.dkr.ecr.us-east-1.amazonaws.com/cluster-registry-client:ed22c79
INFO bom v0.3.0: Generating SPDX Bill of Materials
INFO Processing image reference: 123456789012.dkr.ecr.us-east-1.amazonaws.com/cluster-registry-client:ed22c79
INFO Adding image tag 123456789012.dkr.ecr.us-east-1.amazonaws.com/cluster-registry-client:ed22c79 from reference
INFO Checking the local image cache for 123456789012.dkr.ecr.us-east-1.amazonaws.com/cluster-registry-client:ed22c79
INFO 123456789012.dkr.ecr.us-east-1.amazonaws.com/cluster-registry-client:ed22c79 was found in the local image cache
panic: runtime error: index out of range [1] with length 1
goroutine 1 [running]:
sigs.k8s.io/bom/pkg/spdx.(*spdxDefaultImplementation).PullImagesToArchive(0x0?, {0x140001b2eb0, 0x4c}, {0x140001b2fa0, 0x45})
/Users/strongjz/Documents/code/go/pkg/mod/sigs.k8s.io/bom@v0.3.0/pkg/spdx/implementation.go:422 +0xce8
sigs.k8s.io/bom/pkg/spdx.(*spdxDefaultImplementation).ImageRefToPackage(0x140001ca000?, {0x140001b2eb0, 0x4c}, 0x1e?)
/Users/strongjz/Documents/code/go/pkg/mod/sigs.k8s.io/bom@v0.3.0/pkg/spdx/implementation.go:735 +0xf8
sigs.k8s.io/bom/pkg/spdx.(*SPDX).ImageRefToPackage(...)
/Users/strongjz/Documents/code/go/pkg/mod/sigs.k8s.io/bom@v0.3.0/pkg/spdx/spdx.go:247
sigs.k8s.io/bom/pkg/spdx.(*defaultDocBuilderImpl).GenerateDoc(0xfa78?, 0x1010871b0, 0x140001c3680)
/Users/strongjz/Documents/code/go/pkg/mod/sigs.k8s.io/bom@v0.3.0/pkg/spdx/builder.go:246 +0x924
sigs.k8s.io/bom/pkg/spdx.(*DocBuilder).Generate(0x140001a5530, 0x140001c3680)
/Users/strongjz/Documents/code/go/pkg/mod/sigs.k8s.io/bom@v0.3.0/pkg/spdx/builder.go:96 +0xbc
sigs.k8s.io/bom/cmd/bom/cmd.generateBOM(0x140001c3560)
/Users/strongjz/Documents/code/go/pkg/mod/sigs.k8s.io/bom@v0.3.0/cmd/bom/cmd/generate.go:341 +0x38c
sigs.k8s.io/bom/cmd/bom/cmd.AddGenerate.func1(0x14000472a00?, {0x14000375fc0?, 0x2?, 0x2?})
/Users/strongjz/Documents/code/go/pkg/mod/sigs.k8s.io/bom@v0.3.0/cmd/bom/cmd/generate.go:146 +0xa4
github.com/spf13/cobra.(*Command).execute(0x14000472a00, {0x14000375fa0, 0x2, 0x2})
/Users/strongjz/Documents/code/go/pkg/mod/github.com/spf13/cobra@v1.5.0/command.go:872 +0x4d0
github.com/spf13/cobra.(*Command).ExecuteC(0x10107e8c0)
/Users/strongjz/Documents/code/go/pkg/mod/github.com/spf13/cobra@v1.5.0/command.go:990 +0x354
github.com/spf13/cobra.(*Command).Execute(...)
/Users/strongjz/Documents/code/go/pkg/mod/github.com/spf13/cobra@v1.5.0/command.go:918
sigs.k8s.io/bom/cmd/bom/cmd.Execute()
/Users/strongjz/Documents/code/go/pkg/mod/sigs.k8s.io/bom@v0.3.0/cmd/bom/cmd/root.go:71 +0x28
main.main()
/Users/strongjz/Documents/code/go/pkg/mod/sigs.k8s.io/bom@v0.3.0/cmd/bom/main.go:24 +0x1c
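The panic index out of range [1] with length 1 at implementation.go:422 means PullImagesToArchive indexes the second element of a slice that holds only one entry. That pattern typically comes from splitting a reference or platform string on a separator that turns out to be absent. Below is a minimal sketch of that failure class and the defensive guard that avoids it; the function name and parsing logic are hypothetical illustrations, not the actual bom code:

```go
package main

import (
	"fmt"
	"strings"
)

// splitPlatform is a hypothetical stand-in for the kind of parsing that
// triggers "index out of range [1] with length 1": strings.Split on a
// missing separator returns a one-element slice, so an unconditional
// parts[1] panics exactly as in the log above.
func splitPlatform(platform string) (osName, arch string, err error) {
	parts := strings.Split(platform, "/")
	if len(parts) < 2 {
		// Guard instead of indexing parts[1] blindly.
		return "", "", fmt.Errorf("invalid platform %q: expected os/arch", platform)
	}
	return parts[0], parts[1], nil
}

func main() {
	if _, _, err := splitPlatform("arm64"); err != nil {
		fmt.Println(err) // invalid platform "arm64": expected os/arch
	}
}
```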
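One way to narrow this down locally, sketched under the assumption that the manifest layout is the culprit (bom uses go-containerregistry for registry access, but the snippet below is illustrative, not bom code): check whether the ECR reference resolves to a single-platform image or a multi-arch index, since code that assumes one layout while receiving the other can hit exactly this kind of out-of-range indexing.

```go
package main

import (
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/remote"
	"github.com/google/go-containerregistry/pkg/v1/types"
)

func main() {
	ref, err := name.ParseReference("123456789012.dkr.ecr.us-east-1.amazonaws.com/cluster-registry-client:ed22c79")
	if err != nil {
		log.Fatal(err)
	}

	// DefaultKeychain reuses local docker credentials, e.g. from
	// `aws ecr get-login-password | docker login ...`.
	desc, err := remote.Get(ref, remote.WithAuthFromKeychain(authn.DefaultKeychain))
	if err != nil {
		log.Fatal(err)
	}

	switch desc.MediaType {
	case types.OCIImageIndex, types.DockerManifestList:
		idx, err := desc.ImageIndex()
		if err != nil {
			log.Fatal(err)
		}
		m, err := idx.IndexManifest()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("multi-arch index with %d manifests\n", len(m.Manifests))
	default:
		fmt.Println("single-platform image")
	}
}
```

If this reports a single-platform image for the tag above, that would support the guess that the pull path expects a multi-arch index on darwin/arm64.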
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.