Allow other ConfigMap names than "coredns"
saffronjam opened this issue · 7 comments
Hi!
When deploying CoreDNS with RKE2 in Rancher, the CoreDNS ConfigMap is named "rke2-coredns". But since the name is hardcoded here as:
```ruby
cm = @k8s.api.resource("configmaps", namespace: "kube-system").get("coredns")
```
...I am unable to use this with my CoreDNS instance.
Perhaps the name of the ConfigMap should be an option that defaults to "coredns" but can be overridden?
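For example, a minimal sketch of what the configurable lookup could look like, assuming an environment variable such as the COREDNS_CONFIGMAP_NAME used in the forks discussed below:

```ruby
# Sketch only: read the ConfigMap name from the environment, falling back to
# the current hardcoded default of "coredns".
configmap_name = ENV.fetch("COREDNS_CONFIGMAP_NAME", "coredns")
cm = @k8s.api.resource("configmaps", namespace: "kube-system").get(configmap_name)
```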
I have forked it and modified it for the same use case: https://github.com/saithejareddy/hairpin-proxy
Docker Image: saitheja1926/hairpin-proxy-controller:v0.1
Please use the image above and modify your hairpin-proxy-controller Deployment by adding the ENV variables below:

```
COREDNS_CONFIGMAP_NAME=  # your custom ConfigMap name
COREDNS_IMPORT_CONFIG=true  # enables the CoreDNS import-config feature
```

Both are needed if your workloads are running on Linode's Kubernetes (LKE).
Thanks for taking the initiative @saithejareddy! I also needed this for Kubernetes on Linode.
I can't run untrusted Docker images in production and your code on GitHub was throwing a bunch of errors when I built and ran it. I fixed it. Here's a version of your fork that works for me: https://github.com/sdudley/hairpin-proxy
When using this, don't forget to update the Role for `hairpin-proxy-controller-r` to accommodate the new resourceNames (see my changes to deploy.yaml; a rough sketch follows the snippet below), and also to create a ConfigMap in the `kube-system` namespace with an empty-string entry called `hairpin-proxy.include`, like this:
```yaml
data:
  hairpin-proxy.include: ""
```
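For the Role change, here is a rough sketch of how the resourceNames might be extended. The verbs are illustrative and should match whatever the existing deploy.yaml grants; `coredns-custom` is the name used in the env example below:

```yaml
# Hedged sketch only: keep the verbs from the upstream deploy.yaml and just
# add the custom ConfigMap name alongside "coredns" in resourceNames.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: hairpin-proxy-controller-r
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["coredns", "coredns-custom"]
    verbs: ["get", "watch", "update"]
```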
Then, in the deployment for the controller:
```yaml
kind: Deployment
metadata:
  name: hairpin-proxy-controller
  namespace: hairpin-proxy
spec:
  template:
    spec:
      containers:
        - image: # your-image-here ...
          name: main
          env:
            - name: COREDNS_CONFIGMAP_NAME
              value: "coredns-custom"
            - name: COREDNS_IMPORT_CONFIG
              value: "true"
```
Thank you @sdudley for fixing the errors. Can you please raise a pull request? I will merge the changes and then raise a pull request against the compumike/hairpin-proxy repository.
Observation: this requires the manual creation of an empty `coredns-custom` ConfigMap.
If the ConfigMap doesn't exist, maybe it could be created, initially empty, and then the `hairpin-proxy.include` entry added.
This would also require changing the kube-system Role to add the `create` verb.
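If it goes that route, the extra permission might look roughly like the following additional rule on the kube-system Role (a sketch only; note that RBAC cannot restrict create requests by resourceNames, since the object name is not known at authorization time, so the rule has to allow creating ConfigMaps in the namespace generally):

```yaml
# Hypothetical additional rule; "create" cannot be scoped by resourceNames.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
```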
Hi @saithejareddy - This project does not look like it is being actively maintained by its author, so it seems like raising a PR would be a waste of time...but people are welcome to do whatever they want with my fork. I have been using it for a few days now and it appears stable.
@dvarrazzo, yes, I should have specified in my comment above that you need to "create a ConfigMap called coredns-custom in the kube-system namespace", or more precisely, this:
```yaml
apiVersion: v1
data:
  hairpin-proxy.include: ""
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
```
I like working with Python about as much as I like swimming with alligators. Since I am not moving to Florida any time soon, I will leave it up to someone else if they want to take the initiative to add features like auto-creating the configmap.
> I like working with Python about as much as I like swimming with alligators

So you must love this Ruby project 😄
FYI, yesterday I tested @saithejareddy's branch with @sdudley's last commit and it worked very well, after creating a ConfigMap exactly like the one you suggest (you are right, there needs to be at least an empty entry for `hairpin-proxy.include`, or the code will crash).
Great work everyone!
I needed a fully integrated branch to incorporate into some automated deployment, so I put my own fork together: https://github.com/JarvusInnovations/hairpin-proxy
It has the needed ConfigMap built into `deploy.yml`, and a new `v0.3.0` container image tag published publicly to ghcr.io via GitHub Actions and referenced in the deployment.