kubeconfig not being populated with certificate-authority
jervine opened this issue · 6 comments
I have deployed OpenUnison via the Helm chart to a kubespray-provisioned cluster (i.e. a non-managed cluster). I'm not using impersonation, and when I try to generate a kubeconfig file, either manually or using the oulogin krew plugin, the certificate-authority data is not populated.
The K8s API server endpoint uses the default (self-signed) certificates, and I can see that the templates will populate a temporary file with the certificate data according to this line:
export TMP_CERT=\$(mktemp) && echo -e "$k8s_newline_cert$" > \$TMP_CERT
Where is k8s_newline_cert sourced from? This should (obviously) be the contents of the CA cert (stored on the k8s master node as /etc/kubernetes/pki/ca.crt), however I can't see where in the Helm chart this is set. Should the portal simply be picking this up 'automatically'?
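(For reference, I can read the same CA without touching the master node, since Kubernetes 1.21+ publishes it into every namespace as the kube-root-ca.crt ConfigMap:

kubectl get configmap kube-root-ca.crt -n kube-system -o jsonpath='{.data.ca\.crt}'

so the data should be available in-cluster.)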
Apologies if this is a simple question or been answered before.
Sorry for the delayed response. https://openunison.github.io/knowledgebase/certificates/ covers how certificates are sourced. If you're using a single node, you can let OpenUnison pick up the cert automatically, but if you have a load balancer in front of your API server control plane nodes, you'll want to trust its certificate by adding it to the trusted_certs section with the name k8s-master. You can customize the cert's alias by adding K8S_API_SERVER_CERT to openunison.non_secret_data in your values.yaml.
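If you need to grab that load balancer cert, a quick sketch (replace the host with your API server's load balancer address):

openssl s_client -connect api.example.com:6443 -servername api.example.com </dev/null 2>/dev/null | openssl x509 | base64 -w 0

The output is what goes into pem_b64.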
Hello,
I think I have the same issue. My cluster has a single node, and yet the generated kubectl command from the login portal starts like this:
export TMP_CERT=$(mktemp) && echo -e "" > $TMP_CERT && ...
It does not read the cert at all, and the created kubeconfig has neither certificate-authority nor certificate-authority-data.
I did find the following line in the orchestra pod logs:
context [anonymous] 1:41 attribute k8s_newline_cert isn't defined
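I found it with roughly this (the deployment name is from my install and may differ):

kubectl logs -n openunison deployment/openunison-orchestra | grep k8s_newline_cert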
I did recreate ou-tls-certificate and replaced it as per the linked documentation, and also checked the definitions of all services to make sure all volumes, configmaps, and secrets are applied appropriately. Any ideas/help will be very much welcome.
@dss-boris-petrov can you post your yaml from kubectl get application token -n openunison? Also your values.yaml.
kubectl get application token -n openunison
NAME    AGE
token   295d
kubectl describe application token -n openunison
Name:         token
Namespace:    openunison
Labels:       app.kubernetes.io/component=openunison-applications
              app.kubernetes.io/instance=openunison-orchestra-login-portal
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=openunison
              app.kubernetes.io/part-of=openunison
Annotations:  argocd.argoproj.io/sync-wave: 30
              meta.helm.sh/release-name: orchestra-login-portal
              meta.helm.sh/release-namespace: openunison
API Version:  openunison.tremolo.io/v2
Kind:         Application
Metadata:
  Creation Timestamp:  2023-07-31T11:56:01Z
  Generation:          1
  Resource Version:    5104
Spec:
  Az Timeout Millis:  3000
  Cookie Config:
    Cookies Enabled:      true
    Domain:               #[OU_HOST]
    Http Only:            true
    Key Alias:            session-unison
    Logout URI:           /logout
    Scope:                -1
    Secure:               true
    Session Cookie Name:  tremolosession
    Timeout:              900
  Is App:  true
  Urls:
    Auth Chain:  login-service
    Az Rules:
      Constraint:  o=Tremolo
      Scope:       dn
    Filter Chain:
      Class Name:  com.tremolosecurity.proxy.filters.XForward
      Params:
        Create Headers:  false
      Class Name:  com.tremolosecurity.proxy.filters.SetNoCacheHeaders
      Params:
      Class Name:  com.tremolosecurity.proxy.filters.MapUriRoot
      Params:
        New Root:    /token
        Param Name:  tokenURI
    Hosts:
      #[OU_HOST]
    Proxy To:  http://ouhtml-orchestra-login-portal.openunison.svc:8080${tokenURI}
    Results:
      Au Fail:  default-login-failure
      Az Fail:  default-login-failure
    Uri:  /k8stoken
    Auth Chain:  login-service
    Az Rules:
      Constraint:  o=Tremolo
      Scope:       dn
    Filter Chain:
      Class Name:  com.tremolosecurity.scalejs.token.ws.ScaleToken
      Params:
        Display Name Attribute:  sub
        frontPage.text:  Use this kubectl command to set your user in .kubectl/config. Refresh this screen to generate a new set of tokens. Logging out will clear all of your sessions.
        frontPage.title:  Kubernetes kubectl command
        Home URL:  /scale/
        k8sCaCertName:  #[K8S_API_SERVER_CERT:k8s-master]
        Kubectl Template:  export TMP_CERT=\$(mktemp) && echo -e "$k8s_newline_cert$" > \$TMP_CERT && kubectl config set-cluster #[K8S_CLUSTER_NAME:kubernetes] --server=#[K8S_URL] --certificate-authority=\$TMP_CERT --embed-certs=true && kubectl config set-context #[K8S_CLUSTER_NAME:kubernetes] --cluster=#[K8S_CLUSTER_NAME:kubernetes] --user=$user_id$@#[K8S_CLUSTER_NAME:kubernetes] && kubectl config set-credentials $user_id$@#[K8S_CLUSTER_NAME:kubernetes] --auth-provider=oidc --auth-provider-arg=client-secret= --auth-provider-arg=idp-issuer-url=$token.claims.issuer$ --auth-provider-arg=client-id=$token.trustName$ --auth-provider-arg=refresh-token=$token.refreshToken$ --auth-provider-arg=id-token=$token.encodedIdJSON$ --auth-provider-arg=idp-certificate-authority-data=#[IDP_CERT_DATA:$ou_b64_cert$] && kubectl config use-context #[K8S_CLUSTER_NAME:kubernetes] && rm \$TMP_CERT
        Kubectl Usage:  Run the kubectl command to set your user-context and server connection
        Kubectl Win Usage:  \$TMP_CERT=New-TemporaryFile ; "$k8s_newline_cert_win$" | out-file \$TMP_CERT -encoding oem ; kubectl config set-cluster #[K8S_CLUSTER_NAME:kubernetes] --server=#[K8S_URL] --certificate-authority=\$TMP_CERT --embed-certs=true ; kubectl config set-context #[K8S_CLUSTER_NAME:kubernetes] --cluster=#[K8S_CLUSTER_NAME:kubernetes] --user=$user_id$@#[K8S_CLUSTER_NAME:kubernetes] ; kubectl config set-credentials $user_id$@#[K8S_CLUSTER_NAME:kubernetes] --auth-provider=oidc --auth-provider-arg=client-secret= --auth-provider-arg=idp-issuer-url=$token.claims.issuer$ --auth-provider-arg=client-id=$token.trustName$ --auth-provider-arg=refresh-token=$token.refreshToken$ --auth-provider-arg=id-token=$token.encodedIdJSON$ --auth-provider-arg=idp-certificate-authority-data=$ou_b64_cert$ ; kubectl config use-context #[K8S_CLUSTER_NAME:kubernetes] ; Remove-Item -recurse -force \$TMP_CERT
        Logout URL:  /logout
        Oulogin:  kubectl oulogin --host=#[OU_HOST]
        Token Class Name:  com.tremolosecurity.scalejs.KubectlTokenLoader
        UID Attribute Name:  uid
        Unison Ca Cert Name:  unison-ca
        Warn Minutes Left:  5
    Hosts:
      #[OU_HOST]
    Results:
      Au Fail:  default-login-failure
      Az Fail:  default-login-failure
    Uri:  /k8stoken/token
Events:  <none>
and my values.yaml is as follows:
network:
  openunison_host: "xxx"
  dashboard_host: "xxx"
  api_server_host: "xxx"
  session_inactivity_timeout_seconds: 900
  k8s_url: https://xxx
  force_redirect_to_tls: false
  createIngressCertificate: true
  ingress_type: nginx
  ingress_annotations: {}

cert_template:
  ou: "Kubernetes"
  o: "MyOrg"
  l: "My Cluster"
  st: "State of Cluster"
  c: "MyCountry"

image: docker.io/tremolosecurity/openunison-k8s
myvd_config_path: "WEB-INF/myvd.conf"
k8s_cluster_name: openunison-cp
enable_impersonation: false

impersonation:
  use_jetstack: true
  jetstack_oidc_proxy_image: docker.io/tremolosecurity/kube-oidc-proxy:latest
  explicit_certificate_trust: true

dashboard:
  namespace: "kubernetes-dashboard"
  cert_name: "kubernetes-dashboard-certs"
  label: "k8s-app=kubernetes-dashboard"
  service_name: kubernetes-dashboard
  require_session: true

certs:
  use_k8s_cm: false

trusted_certs: []

monitoring:
  prometheus_service_account: system:serviceaccount:monitoring:prometheus-k8s

network_policies:
  enabled: false
  ingress:
    enabled: true
    labels:
      app.kubernetes.io/name: ingress-nginx
  monitoring:
    enabled: true
    labels:
      app.kubernetes.io/name: monitoring
  apiserver:
    enabled: false
    labels:
      app.kubernetes.io/name: kube-system

services:
  enable_tokenrequest: false
  token_request_audience: api
  token_request_expiration_seconds: 600
  node_selectors: []

openunison:
  include_auth_chain: azuread-load-groups
  replicas: 1
  non_secret_data:
    K8S_DB_SSO: oidc
    PROMETHEUS_SERVICE_ACCOUNT: system:serviceaccount:monitoring:prometheus-k8s
    SHOW_PORTAL_ORGS: "true"
  secrets: []
  html:
    image: docker.io/tremolosecurity/openunison-k8s-html
  enable_provisioning: false
  use_standard_jit_workflow: true
* There are OIDC configs below those lines that I cannot share, but I believe they are irrelevant to the issue.
From what I gather, the variables $k8s_newline_cert and $k8s_newline_cert_win are never set? Also, everything was working fine for a long time before this problem appeared (no idea why).
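For comparison, reading the Kubectl Template above, on a working install the rendered command should start with the PEM inlined between the quotes, something like (certificate body elided):

export TMP_CERT=$(mktemp) && echo -e "-----BEGIN CERTIFICATE-----\nMIID...<snip>...\n-----END CERTIFICATE-----" > $TMP_CERT && ...

so the empty echo -e "" really does point at $k8s_newline_cert never being resolved.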
When I recreated the ou-tls-certificate secret, I simply deleted it and ran a helm upgrade, which recreated it (hence the createIngressCertificate: true).
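Concretely, that was roughly (release and chart names from my install; yours may differ):

kubectl delete secret ou-tls-certificate -n openunison
helm upgrade orchestra-login-portal tremolo/orchestra-login-portal -n openunison -f values.yaml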
What version of OpenUnison are you running?
What happens if you add

trusted_certs:
- name: k8s-master
  pem_b64: <base64-encoded cert from your API server>

to your values.yaml and redeploy?
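If your current kubeconfig already embeds the cluster CA, the pem_b64 value can be copied straight out of it, since certificate-authority-data is already base64-encoded:

kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'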
Confirmed, this is a bug in the latest OpenUnison. The workaround is to either downgrade to 1.0.39 or add the cert to the list of trusted_certs with the name k8s-master.
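Once redeployed (or downgraded), rerun the portal's kubectl command and sanity-check that the CA actually landed in the kubeconfig, e.g.:

kubectl config view --minify --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | head -c 40

Non-empty output means the certificate-authority data is being embedded again.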