openshift/boilerplate

onboard operator-sdk >= 1

georgettica opened this issue · 12 comments

Two issues I found so far:

1

sed: can't read build/Dockerfile: No such file or directory

The file is saved at the repo root as `/Dockerfile`. Worked around with:

mkdir build
ln -s ../Dockerfile build/Dockerfile
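The workaround can be reproduced in a scratch directory (the `FROM scratch` content is just a placeholder): boilerplate expects `build/Dockerfile`, osdk v1 writes `./Dockerfile`, and a relative symlink bridges the two.

```shell
# Scratch-dir sketch of the symlink workaround above.
demo=$(mktemp -d)
cd "$demo"
printf 'FROM scratch\n' > Dockerfile   # stand-in for the osdk-generated file
mkdir build
ln -s ../Dockerfile build/Dockerfile   # boilerplate's expected location
cat build/Dockerfile                   # resolves through the symlink
```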

2

On macOS:

sed: 1: "build/Dockerfile": undefined label 'uild/Dockerfile'

no solution found so far

  1. When running what? In our conversation earlier, you hinted that osdk somehow generates the Dockerfile in the repo root? Which command does that? On what does it base the contents? Is there some other osdk command that uses the Dockerfile, and expects it to be in that location? All of these are leading questions to decide whether we want boilerplate to expect the file to be somewhere else, or if consumers at osdk>=1 should do something different to generate/create it.

  2. Mac uses BSD tools by default. There will be a time, hopefully soon, when these make targets will run in a container -- the same container used by prow -- which would eliminate these discrepancies. Until then, please install and use gnu sed (and other utils) if you want to run locally on a Mac.
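The error in issue 2 is a symptom of this split. With GNU sed, `-i` takes no argument; BSD sed consumes the next word as a backup suffix, so the substitution expression is eaten and the filename is parsed as a sed script. `b` is sed's branch command, hence `undefined label 'uild/Dockerfile'`. A minimal sketch (the file path and contents here are illustrative):

```shell
# Works with GNU sed; on BSD sed the -i flag would swallow the
# substitution expression as a backup suffix and then try to parse
# the filename as a script.
printf 'FROM golang:1.13 as builder\n' > /tmp/sed-demo
sed -i 's/golang:1.13/golang:1.15/' /tmp/sed-demo
cat /tmp/sed-demo
```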

  1. when running make
  2. I will wait patiently, but GNU sed is installed as gsed, which doesn't help

the Dockerfile is created from the command operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator
and the contents look like:

$ find . -type f
./Makefile
./bin/manager
./Dockerfile
./PROJECT
./go.mod
./.gitignore
./go.sum
./main.go
./hack/boilerplate.go.txt
./config/certmanager/kustomizeconfig.yaml
./config/certmanager/kustomization.yaml
./config/certmanager/certificate.yaml
./config/default/manager_auth_proxy_patch.yaml
./config/default/manager_webhook_patch.yaml
./config/default/webhookcainjection_patch.yaml
./config/default/kustomization.yaml
./config/prometheus/monitor.yaml
./config/prometheus/kustomization.yaml
./config/scorecard/kustomization.yaml
./config/scorecard/patches/olm.config.yaml
./config/scorecard/patches/basic.config.yaml
./config/scorecard/bases/config.yaml
./config/rbac/leader_election_role_binding.yaml
./config/rbac/auth_proxy_client_clusterrole.yaml
./config/rbac/role_binding.yaml
./config/rbac/auth_proxy_service.yaml
./config/rbac/auth_proxy_role_binding.yaml
./config/rbac/leader_election_role.yaml
./config/rbac/kustomization.yaml
./config/rbac/auth_proxy_role.yaml
./config/manager/manager.yaml
./config/manager/kustomization.yaml
./config/webhook/kustomizeconfig.yaml
./config/webhook/service.yaml
./config/webhook/kustomization.yaml
$ cat Dockerfile
# Build the manager binary
FROM golang:1.13 as builder

WORKDIR /workspace
# Copy the Go Modules manifests
COPY go.mod go.mod
COPY go.sum go.sum
# cache deps before building and copying source so that we don't need to re-download as much
# and so that source changes don't invalidate our downloaded layer
RUN go mod download

# Copy the go source
COPY main.go main.go
COPY api/ api/
COPY controllers/ controllers/

# Build
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO111MODULE=on go build -a -o manager main.go

# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=builder /workspace/manager .
USER nonroot:nonroot

ENTRYPOINT ["/manager"]
$ operator-sdk version
operator-sdk version: "v1.2.0", commit: "215fc50b2d4acc7d92b36828f42d7d1ae212015c", kubernetes version: "v1.18.8", go version: "go1.15.3", GOOS: "linux", GOARCH: "amd64"
  1. Okay. It'll make sense for bp to expect it in the repo root when we get to v1. (BTW, today Eric is thinking we won't be trying to support v1, or onboard RMO, until mid-January at the earliest.)
  2. Since #103 you can run ./boilerplate/_lib/container-make {target}.

Woohoo! Thanks for 2.
And about 1: thanks for telling me! I will work on the copy-🍝 mess tomorrow.

  1. Why don't you use the OPERATOR_DOCKERFILE variable?
  2. gsed ships with a sed symlink. I added it to my profile:
$ ls -l /usr/local/opt/gnu-sed/libexec/gnubin
total 0
lrwxr-xr-x 1 rporresm staff 14 Jan 15  2020 sed -> ../../bin/gsed
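On point 1, this is the generic make mechanism the `OPERATOR_DOCKERFILE` suggestion relies on (whether boilerplate's targets actually read this variable is per the comment above, not verified here): a `?=` default in the Makefile can be overridden per-invocation on the command line.

```shell
# Sketch: a ?= default is only used when the variable isn't already set,
# so a command-line assignment wins. Demo Makefile path is illustrative.
printf 'OPERATOR_DOCKERFILE ?= build/Dockerfile\nshow:\n\t@echo $(OPERATOR_DOCKERFILE)\n' > /tmp/Makefile.demo
make -f /tmp/Makefile.demo show                                   # prints build/Dockerfile
make -f /tmp/Makefile.demo show OPERATOR_DOCKERFILE=./Dockerfile  # prints ./Dockerfile
```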

Regarding the gsed symlink: I was a noob and did

export PATH=${PATH}:/usr/local/opt/gnu-sed/libexec/gnubin

which I now know caused the problem: appending leaves the BSD `/usr/bin/sed` earlier in PATH, so the gnubin link was never picked up. It needs to be prepended instead.
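PATH lookup is first-match-wins, so the directory order decides which sed runs. Simulated below with a scratch directory standing in for `/usr/local/opt/gnu-sed/libexec/gnubin` (the stub script just announces itself):

```shell
# Stand-in for gnubin: a stub "sed" that identifies itself.
mkdir -p /tmp/gnubin-demo
printf '#!/bin/sh\necho gnu-sed\n' > /tmp/gnubin-demo/sed
chmod +x /tmp/gnubin-demo/sed
# Prepended, the stub shadows /usr/bin/sed; appended, it never would.
env PATH="/tmp/gnubin-demo:${PATH}" sed
```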

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

/remove-lifecycle stale

should this be closed @2uasimojo @rporres ?


Since #176 we've greatly reduced our reliance on operator-sdk, and our tie-in to specific versions of it. IMHO given the amount of trouble we've had coordinating across even a small range of 0.1x versions, it would be nice to continue down that path. So wherever there's an opportunity to replace an osdk command with $something_else (like controller-gen, as in #176) we should do that.

The thing that comes to mind where v1 could add real value would be generating CSVs and OLM bundles, which we currently do with our own home-grown scripting. But I think the problem is that the osdkv1-based ways of doing that depend on the osdkv1-ish directory structure, which is wildly different from v0.x.

In conclusion, I don't know if it makes sense to hold this issue open or not. But if so, its purpose has certainly changed from what it was originally.

If it has changed, let's close and maybe reopen if necessary.

We can track in a different issue, though I am not sure I will drive testing from my side of the fence 🤷‍♂️

We’re kinda tracking things more in jira anyway.