Add support for custom envFrom secretRefs and configMapRefs
Raboo opened this issue · 12 comments
Hi,
Would it be possible to add custom envFrom support in the future?
Background: I am running rook-ceph (an operator for running Ceph distributed storage). It has an ObjectBucketClaim (OBC) that can create S3 buckets. The OBC in turn creates a ConfigMap and a Secret containing the following keys: BUCKET_REGION, BUCKET_HOST, BUCKET_PORT, BUCKET_NAME, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY.
To use these with this Helm chart today, I first have to apply the OBC, then open the ConfigMap and Secret, copy the values out of those keys, paste them into this chart's values, and only then install the chart. That makes it impossible for me to automate.
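For reference, the OBC itself is a tiny manifest; a claim roughly like the following (fields per the rook docs, names here illustrative) produces a ConfigMap and a Secret that are both named after the claim:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-bucket   # the generated ConfigMap and Secret get this same name
spec:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-bucket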
If you could make it possible to add additional envFrom[].secretRef and envFrom[].configMapRef entries to the values that are applied to the pod container specs, then I could use the auto-generated keys above directly, like so:
secrets.s3.accessKey=$(AWS_ACCESS_KEY_ID)
secrets.s3.secretKey=$(AWS_SECRET_ACCESS_KEY)
s3.region=$(BUCKET_REGION)
s3.regionEndpoint=$(BUCKET_HOST)
s3.bucket=$(BUCKET_NAME)
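Concretely, the values addition I have in mind would look something like this (extraEnvFrom is a hypothetical key, not something the chart has today):

extraEnvFrom:
  - configMapRef:
      name: ceph-bucket   # ConfigMap created by the OBC
  - secretRef:
      name: ceph-bucket   # Secret created by the OBC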
Thanks!
We have the same use case; for testing purposes I have used:
storage: s3
s3:
  regionEndpoint: http://rook-ceph-rgw-ceph-objectstore.rook-ceph.svc
  bucket: ceph-bkt-232cf1c2-7a22-4bf9-9a4a-6a1a0b1ae0fc
  region: us-east-1
secrets:
  s3: {}
extraEnvVars:
  - name: REGISTRY_STORAGE_S3_ACCESSKEY
    valueFrom:
      secretKeyRef:
        name: ceph-bucket
        key: AWS_ACCESS_KEY_ID
  - name: REGISTRY_STORAGE_S3_SECRETKEY
    valueFrom:
      secretKeyRef:
        name: ceph-bucket
        key: AWS_SECRET_ACCESS_KEY
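For context, these REGISTRY_STORAGE_S3_* names are not chart-specific: the registry (distribution) overrides its config.yml from environment variables of the form REGISTRY_<section>_<key>, so the two variables above map onto this config section:

storage:
  s3:
    accesskey: <value of AWS_ACCESS_KEY_ID>
    secretkey: <value of AWS_SECRET_ACCESS_KEY>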
Oh, I must have missed that there was an extraEnvVars option. I will try it and see if it works.
It was added (#35) after you created the issue, and a new chart version hasn't been published yet.
@canterberry can you push v1.14.0 to the helm repo so I can test PR #35?
Pushed v1.14.0 about 10h ago!
$ curl -s https://helm.twun.io/index.yaml | grep version
version: 1.13.2
version: 1.13.1
version: 1.13.0
version: 1.12.0
version: 1.11.0
version: 1.10.1
version: 1.10.0
version: 1.9.7
version: 1.9.6
version: 0.0.5
version: 0.0.4
version: 0.0.3
version: 0.0.2
version: 0.0.1
For some reason the variable interpolation is not working; in normal cases it works like this:
$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - name: busybox
      image: busybox:latest
      env:
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name # or "status.hostIP" or "spec.nodeName" etc.
        - name: INTERPOLATION
          value: abc-$(HOSTNAME)
      command: ['sh', '-c', 'sleep 3600']
  restartPolicy: Never
$ kubectl apply -f pod.yaml
pod/busybox created
$ kubectl exec -it busybox -- sh -c 'echo $INTERPOLATION'
abc-busybox
$ kubectl delete pod busybox
pod "busybox" deleted
For the registry pods it simply looks like this:
$ kubectl exec -it registry-c46cf8b8c-6m54d -- sh -c 'echo $REGISTRY_STORAGE_S3_SECRETKEY'
$(AWS_SECRET_ACCESS_KEY)
I expected the output to be my actual s3 secret key.
Any idea on what is wrong?
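A plausible cause, going by the Kubernetes docs on dependent environment variables: the kubelet only expands a $(VAR) reference against variables defined earlier for that container, and a reference to an undefined variable is left as the literal string. Since nothing injects AWS_SECRET_ACCESS_KEY into the registry container (there is still no envFrom support), $(AWS_SECRET_ACCESS_KEY) stays verbatim. A sketch of an ordering that should resolve, assuming the OBC Secret is named ceph-bucket:

extraEnvVars:
  # Define the source variable first, pulled from the OBC Secret...
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: ceph-bucket
        key: AWS_SECRET_ACCESS_KEY
  # ...so the $( ) reference below can resolve; a reference to an
  # undefined variable is kept as the literal string, as seen above.
  - name: REGISTRY_STORAGE_S3_SECRETKEY
    value: $(AWS_SECRET_ACCESS_KEY)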
So the S3 integration should definitely support S3 providers other than AWS. I'm having issues with Minio:
2b6b3509ddbc: Pushing [==================================================>]  44.08MB
791274611ac9: Layer already exists
dc365698f1f2: Layer already exists
636159d907db: Pushing [==================================================>]  8.871MB
485115e86f04: Layer already exists
30f3545290f2: Retrying in 5 seconds
7fcd2600f5ad: Pushing [=======================>                           ]  8.582MB/18.47MB
8f56c3340629: Pushing [====================>                              ]  216MB/528.4MB
ba6e5ff31f23: Pushing [============================================>      ]  135.3MB/151.9MB
9f9f651e9303: Waiting
0b3c02b5d746: Waiting
62a747bf1719: Waiting
received unexpected HTTP status: 500 Internal Server Error
Using an S3 provider other than AWS does work. The part that doesn't work for me is reusing the already existing Secret keys I got.
I got it running with Minio, but some layers keep failing with:
2b6b3509ddbc: Pushing [==================================================>] 44.08MB
791274611ac9: Pushed
dc365698f1f2: Pushed
636159d907db: Pushing [==================================================>] 8.871MB
485115e86f04: Pushed
30f3545290f2: Pushing [==================================================>] 56.94MB
7fcd2600f5ad: Pushing [==================================================>] 19.04MB
8f56c3340629: Pushing [====================================> ] 388.9MB/528.4MB
ba6e5ff31f23: Pushing [====================> ] 63.28MB/151.9MB
9f9f651e9303: Retrying in 11 seconds
0b3c02b5d746: Retrying in 5 seconds
62a747bf1719: Waiting
received unexpected HTTP status: 500 Internal Server Error
From the pod logs:
0.0.0.182 - - [17/Dec/2021:06:30:58 +0000] "PUT /v2/koeniz-abfuhr-api/blobs/uploads/12112417-2077-427e-8887-826838ac5fd1?_state=QOBlpwsE2IUvBu9TVHxBTXT-ryPrSju7o75LX_rcjed7Ik5hbWUiOiJrb2VuaXotYWJmdWhyLWFwaSIsIlVVSUQiOiIxMjExMjQxNy0yMDc3LTQyN2UtODg4Ny04MjY4MzhhYzVmZDEiLCJPZmZzZXQiOjUxNTMyNzMsIlN0YXJ0ZWRBdCI6IjIwMjEtMTItMTdUMDY6MzA6NTNaIn0%3D&digest=sha256%3Af02b617c6a8c415a175f44d7e2c5d3b521059f2a6112c5f022e005a44a759f2d HTTP/1.1" 404 76 "" "docker/20.10.7 go/go1.13.8 git-commit/20.10.7-0ubuntu5~20.04.2 kernel/5.4.0-88-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.7 \\(linux\\))"
time="2021-12-17T06:30:58.727738131Z" level=error msg="unknown error reading request payload: SerializationError: failed to decode S3 XML error response
status code: 413, request id: , host id:
caused by: expected element type <Error> but have <html>" go.version=go1.11.2 http.request.host=registry.damn.li http.request.id=31399fb0-7bae-46e0-9276-72de72cf654d http.request.method=PATCH http.request.remoteaddr=10.0.1.238 http.request.uri="/v2/koeniz-abfuhr-api/blobs/uploads/472baa1b-c4ab-4d7e-8e0b-279be2bda20f?_state=tMWWE48rCSrU4cMHWyT3uB-3KFyY_tuvf-Ku2Qv0efN7Ik5hbWUiOiJrb2VuaXotYWJmdWhyLWFwaSIsIlVVSUQiOiI0NzJiYWExYi1jNGFiLTRkN2UtOGUwYi0yNzliZTJiZGEyMGYiLCJPZmZzZXQiOjAsIlN0YXJ0ZWRBdCI6IjIwMjEtMTItMTdUMDY6MzA6NDQuMzcwNDg5OTQ4WiJ9" http.request.useragent="docker/20.10.7 go/go1.13.8 git-commit/20.10.7-0ubuntu5~20.04.2 kernel/5.4.0-88-generic os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.7 \(linux\))" vars.name=koeniz-abfuhr-api vars.uuid=472baa1b-c4ab-4d7e-8e0b-279be2bda20f
time="2021-12-17T06:30:58.793678188Z" level=error msg="response completed with error" err.code=unknown err.detail="SerializationError: failed to decode S3 XML error response
status code: 413, request id: , host id:
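The status code 413 together with "expected element type <Error> but have <html>" suggests the response is an HTML "Request Entity Too Large" page from a proxy in front of the registry, not an S3 XML error from Minio itself. If this runs behind ingress-nginx, its default 1m body-size limit would produce exactly that; a possible fix, assuming the chart passes ingress.annotations through:

ingress:
  annotations:
    # ingress-nginx: lift the default 1m request-body limit so large
    # layer uploads aren't rejected with an HTML 413 page
    nginx.ingress.kubernetes.io/proxy-body-size: "0"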