jp-gouin/helm-openldap

slapd not starting properly

cbwilliamsnh opened this issue · 18 comments

I know it's a configuration issue on my part, but I haven't been able to figure it out.

When I deploy the chart, the openldap service is throwing the following error:

read_config: no serverID / URL match found. Check slapd -h arguments.

I'm unable to open a shell in the pod to run the command and see what's going on. What am I missing?

Hi @cbwilliamsnh,
Are you deploying the chart into a clean namespace? If you already tried deploying the solution, you may have orphaned volumes being reattached to the StatefulSet.
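If that's the case, something like this should show any leftover claims (a sketch, assuming the release and namespace are both named openldap; adjust to your setup):

kubectl get pvc -n openldap
# delete any claim left over from a previous release before re-installing
kubectl delete pvc <claim-name> -n openldap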

I am now.

kubectl create namespace openldap
helm install openldap -n openldap -f values.yaml helm-openldap/openldap-stack-ha

and now I'm seeing "response 404 (backend NotFound), service rules for the path non-existent" when I try to bring up the admin console.

Can you connect to one of the openldap pods, run the following command, and post the result?


LDAPTLS_REQCERT=never ldapsearch -x -D 'cn=admin,dc=example,dc=org' -w Not@SecurePassw0rd -H ldaps://localhost:1636 -b 'dc=example,dc=org'
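To get a shell on a pod, something like this should work (assuming the release is named openldap and lives in the openldap namespace; your pod name may differ):

kubectl exec -it -n openldap openldap-0 -- bash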

Also, are you using an ingress controller to expose phpldapadmin?

I'm using the ingress controller created by the chart. I'm trying to find where I need to open a shell to run the command above.

The chart doesn't come with an ingress controller.
It creates Ingress resources, but those require an ingress controller in the cluster to actually be exposed.
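You can check whether one is installed with something like:

kubectl get ingressclass
kubectl get pods -A | grep -i ingress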

Then why, when I look at the ingresses in the cluster, is there one listed for openldap-phpldapadmin?

Running this command, I'm not getting connected to the LDAP server (-1):

LDAPTLS_REQCERT=never ldapsearch -x -D 'cn=admin,dc=unifycx,dc=unifyco,dc=ai' -w Ju$t4me! -H ldaps://localhost:1636 -b 'dc=unifycx,dc=unifyco,dc=ai'

And in prior attempts I was able to get to the admin console in my browser... so I'm a bit confused (I'm also somewhat new to GKE and Helm).

I'm going to try deploying the chart into a new, clean namespace without my own values.yaml file. I'll get back to you tomorrow.
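Roughly something like this (the namespace name is just an example, and this assumes the helm-openldap repo is already added):

kubectl create namespace openldap-test
helm install openldap-test -n openldap-test helm-openldap/openldap-stack-ha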

@jp-gouin I'm missing something very basic. All I'm trying to do is install the chart with the unifycx.unifyco.ai domain, be able to get to the phpldapadmin UI, and have the LDAP server's API available to the rest of the cluster.

When I first did the install (into an empty namespace), I simply changed example.org to unifycx.unifyco.ai and installed the chart with my values file. What else do I need to do after that to get this working?

Here are the results of the command you asked me to run:

# extended LDIF
#
# LDAPv3
# base <dc=example,dc=org> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# example.org
dn: dc=example,dc=org
objectClass: dcObject
objectClass: organization
dc: example
o: example

# users, example.org
dn: ou=users,dc=example,dc=org
objectClass: organizationalUnit
ou: users

# user01, users, example.org
dn: cn=user01,ou=users,dc=example,dc=org
cn: User1
cn: user01
sn: Bar1
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
userPassword:: Yml0bmFtaTE=
uid: user01
uidNumber: 1000
gidNumber: 1000
homeDirectory: /home/user01

# user02, users, example.org
dn: cn=user02,ou=users,dc=example,dc=org
cn: User2
cn: user02
sn: Bar2
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
userPassword:: Yml0bmFtaTI=
uid: user02
uidNumber: 1001
gidNumber: 1001
homeDirectory: /home/user02

# readers, users, example.org
dn: cn=readers,ou=users,dc=example,dc=org
cn: readers
objectClass: groupOfNames
member: cn=user01,ou=users,dc=example,dc=org
member: cn=user02,ou=users,dc=example,dc=org

# search result
search: 2
result: 0 Success

# numResponses: 6
# numEntries: 5

That's the expected output for the default global.ldapDomain, which suggests your custom domain isn't being picked up.
Can you post your values?


# Default values for openldap.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# Global Docker image parameters
# Please, note that this will override the image parameters, including dependencies, configured to use the global value
# Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass
global:
  imageRegistry: ''
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  #imagePullSecrets: [""]
  ## ldapDomain, can be explicit (e.g. dc=toto,c=ca) or domain based (e.g. unifycx.unifyco.ai)
  ldapDomain: 'unifycx.unifyco.ai'
  # Specifies an existing secret to be used for admin and config user passwords. The expected keys are LDAP_ADMIN_PASSWORD and LDAP_CONFIG_ADMIN_PASSWORD.
  # existingSecret: ""
  ## Default Passwords to use, stored as a secret. Not used if existingSecret is set.
  adminPassword: Ju$t4me!
  configPassword: Ju$t4me!
  ldapPort: 389
  sslLdapPort: 636

## @section Common parameters

## @param kubeVersion Override Kubernetes version
##
kubeVersion: ''
## @param nameOverride String to partially override common.names.fullname
##
nameOverride: ''
## @param fullnameOverride String to fully override common.names.fullname
##
fullnameOverride: ''
## @param commonLabels Labels to add to all deployed objects
##
commonLabels: {}
## @param commonAnnotations Annotations to add to all deployed objects
##
commonAnnotations: {}
## @param clusterDomain Kubernetes cluster domain name
##
clusterDomain: cluster.local
## @param extraDeploy Array of extra objects to deploy with the release
##
extraDeploy: []

replicaCount: 3

image:
  # From repository https://hub.docker.com/r/bitnami/openldap/
  repository: bitnami/openldap
  tag: 2.6.3
  pullPolicy: Always
  pullSecrets: []

# Set the container log level
# Valid log levels: none, error, warning, info (default), debug, trace
logLevel: info

# Settings for enabling TLS with custom certificate
# need a secret with tls.crt, tls.key and ca.crt keys with associated files
# Ref: https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/#create-a-secret
customTLS:
  enabled: false
  image:
    repository: alpine/openssl
    tag: latest
  secret: '' # The name of a kubernetes.io/tls type secret to use for TLS
## Add additional labels to all resources
extraLabels: {}

service:
  annotations: {}
  ## If service type NodePort, define the value here
  #ldapPortNodePort:
  #sslLdapPortNodePort:
  ## List of IP addresses at which the service is available
  ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
  ##
  externalIPs: []

  #loadBalancerIP:
  #loadBalancerSourceRanges: []
  type: ClusterIP
  sessionAffinity: None

# Default configuration for openldap as environment variables. These get injected directly in the container.
# Use the env variables from https://hub.docker.com/r/bitnami/openldap/
# Be careful, do not modify the following values unless you know exactly what you are doing
env:
  BITNAMI_DEBUG: 'true'
  LDAP_LOGLEVEL: '256'
  LDAP_TLS_ENFORCE: 'false'
  LDAPTLS_REQCERT: 'never'
  LDAP_ENABLE_TLS: 'no'
  LDAP_CONFIG_ADMIN_ENABLED: 'yes'
  LDAP_CONFIG_ADMIN_USERNAME: 'admin'
  LDAP_SKIP_DEFAULT_TREE: 'no'

# Pod Disruption Budget for Stateful Set
# Disabled by default, to ensure backwards compatibility
pdb:
  enabled: false
  minAvailable: 1
  maxUnavailable: ''

## User list to create (comma separated list), can't be used with customLdifFiles
## Default set by bitnami image
# users: user01,user02

## User password to create (comma separated list, one for each user)
## Default set by bitnami image
# userPasswords: bitnami1, bitnami2

## Group to create and add list of user above
## Default set by bitnami image
# group: readers

# Custom openldap schema files to be used in addition to the default schemas
# customSchemaFiles:
#   custom.ldif: |-
#     # custom schema
#   anothercustom.ldif: |-
#     # another custom schema

## Existing configmap with custom ldif
# Can't be used with customLdifFiles
# Same format as customLdifFiles
# customLdifCm: my-custom-ldif-cm

# Custom openldap configuration files used to override default settings
# DO NOT FORGET to put the Root Organisation object as it won't be created while using customLdifFiles
# customLdifFiles:
#   00-root.ldif: |-
#     # Root creation
#     dn: dc=unifycx.unifyco.ai,dc=org
#     objectClass: dcObject
#     objectClass: organization
#     o: unifycx.unifyco.ai, Inc
#   01-default-group.ldif: |-
#     dn: cn=myGroup,dc=unifycx.unifyco.ai,dc=org
#     cn: myGroup
#     gidnumber: 500
#     objectclass: posixGroup
#     objectclass: top
#   02-default-user.ldif: |-
#     dn: cn=Jean Dupond,dc=unifycx.unifyco.ai,dc=org
#     cn: Jean Dupond
#     gidnumber: 500
#     givenname: Jean
#     homedirectory: /home/users/jdupond
#     objectclass: inetOrgPerson
#     objectclass: posixAccount
#     objectclass: top
#     sn: Dupond
#     uid: jdupond
#     uidnumber: 1000
#     userpassword: {MD5}KOULhzfBhPTq9k7a9XfCGw==

# Custom openldap ACLs
# If not defined, the following default ACLs are applied:
# customAcls: |-
#   dn: olcDatabase={2}mdb,cn=config
#   changetype: modify
#   replace: olcAccess
#   olcAccess: {0}to *
#     by dn.exact=gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth manage
#     by * break
#   olcAccess: {1}to attrs=userPassword,shadowLastChange
#     by self write
#     by dn="{{ include "global.bindDN" . }}" write
#     by anonymous auth by * none
#   olcAccess: {2}to *
#     by dn="{{ include "global.bindDN" . }}" write
#     by self read
#     by * none

replication:
  enabled: true
  # Enter the name of your cluster, defaults to "cluster.local"
  clusterName: 'cluster.local'
  retry: 60
  timeout: 1
  interval: 00:00:00:10
  starttls: 'no'
  tls_reqcert: 'never'
## Persist data to a persistent volume
persistence:
  enabled: true
  ## database data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "standard-singlewriter"
  # existingClaim: openldap-pvc
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  storageClass: ''

## @param customLivenessProbe Custom livenessProbe that overrides the default one
##
customLivenessProbe: {}
## @param customReadinessProbe Custom readinessProbe that overrides the default one
##
customReadinessProbe: {}
## @param customStartupProbe Custom startupProbe that overrides the default one
##
customStartupProbe: {}
## OPENLDAP  resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
## @param resources.limits The resources limits for the OPENLDAP  containers
## @param resources.requests The requested resources for the OPENLDAP  containers
##
resources:
  limits: {}
  requests: {}
## Configure Pods Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param podSecurityContext.enabled Enabled OPENLDAP  pods' Security Context
## @param podSecurityContext.fsGroup Set OPENLDAP  pod's Security Context fsGroup
##
podSecurityContext:
  enabled: true
  fsGroup: 1001
## Configure Container Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param containerSecurityContext.enabled Enabled OPENLDAP  containers' Security Context
## @param containerSecurityContext.runAsUser Set OPENLDAP  containers' Security Context runAsUser
## @param containerSecurityContext.runAsNonRoot Set OPENLDAP  containers' Security Context runAsNonRoot
##
containerSecurityContext:
  enabled: false
  runAsUser: 1001
  runAsNonRoot: true

## @param existingConfigmap The name of an existing ConfigMap with your custom configuration for OPENLDAP
##
existingConfigmap:
## @param command Override default container command (useful when using custom images)
##
command: []
## @param args Override default container args (useful when using custom images)
##
args: []
## @param hostAliases OPENLDAP  pods host aliases
## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
##
hostAliases: []
## @param podLabels Extra labels for OPENLDAP  pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## @param podAnnotations Annotations for OPENLDAP  pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## @param podAffinityPreset Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAffinityPreset: ''
## @param podAntiAffinityPreset Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAntiAffinityPreset: soft
## Node affinity preset
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
##
nodeAffinityPreset:
  ## @param nodeAffinityPreset.type Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
  ##
  type: ''
  ## @param nodeAffinityPreset.key Node label key to match. Ignored if `affinity` is set
  ##
  key: ''
  ## @param nodeAffinityPreset.values Node label values to match. Ignored if `affinity` is set
  ## E.g.
  ## values:
  ##   - e2e-az1
  ##   - e2e-az2
  ##
  values: []
## @param affinity Affinity for OPENLDAP  pods assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## NOTE: `podAffinityPreset`, `podAntiAffinityPreset`, and `nodeAffinityPreset` will be ignored when it's set
##
affinity: {}
## @param nodeSelector Node labels for OPENLDAP  pods assignment
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## @param tolerations Tolerations for OPENLDAP  pods assignment
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## @param updateStrategy.type OPENLDAP  statefulset strategy type
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
##
updateStrategy:
  ## StrategyType
  ## Can be set to RollingUpdate or OnDelete
  ##
  type: RollingUpdate
## @param priorityClassName OPENLDAP  pods' priorityClassName
##
priorityClassName: ''
## @param schedulerName Name of the k8s scheduler (other than default) for OPENLDAP  pods
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ''
## @param lifecycleHooks for the OPENLDAP  container(s) to automate configuration before or after startup
##
lifecycleHooks: {}
## @param extraEnvVars Array with extra environment variables to add to OPENLDAP  nodes
## e.g:
## extraEnvVars:
##   - name: FOO
##     value: "bar"
##
extraEnvVars: []
## @param extraEnvVarsCM Name of existing ConfigMap containing extra env vars for OPENLDAP  nodes
##
extraEnvVarsCM:
## @param extraEnvVarsSecret Name of existing Secret containing extra env vars for OPENLDAP  nodes
##
extraEnvVarsSecret:
## @param extraVolumes Optionally specify extra list of additional volumes for the OPENLDAP  pod(s)
##
extraVolumes: []
## @param extraVolumeMounts Optionally specify extra list of additional volumeMounts for the OPENLDAP  container(s)
##
extraVolumeMounts: []
## @param sidecars Add additional sidecar containers to the OPENLDAP  pod(s)
## e.g:
## sidecars:
##   - name: your-image-name
##     image: your-image
##     imagePullPolicy: Always
##     ports:
##       - name: portname
##         containerPort: 1234
##
sidecars: {}
## @param initContainers Add additional init containers to the OPENLDAP  pod(s)
## ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
## e.g:
## initContainers:
##  - name: your-image-name
##    image: your-image
##    imagePullPolicy: Always
##    command: ['sh', '-c', 'echo "hello world"']
##
initContainers: {}
## ServiceAccount configuration
##
serviceAccount:
  ## @param serviceAccount.create Specifies whether a ServiceAccount should be created
  ##
  create: true
  ## @param serviceAccount.name The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the common.names.fullname template
  ##
  name: ''

## @section Init Container Parameters

## 'volumePermissions' init container parameters
## Changes the owner and group of the persistent volume mount point to runAsUser:fsGroup values
##   based on the *podSecurityContext/*containerSecurityContext parameters
##
volumePermissions:
  ## @param volumePermissions.enabled Enable init container that changes the owner/group of the PV mount point to `runAsUser:fsGroup`
  ##
  enabled: false
  ## Bitnami Shell image
  ## ref: https://hub.docker.com/r/bitnami/bitnami-shell/tags/
  ## @param volumePermissions.image.registry Bitnami Shell image registry
  ## @param volumePermissions.image.repository Bitnami Shell image repository
  ## @param volumePermissions.image.tag Bitnami Shell image tag (immutable tags are recommended)
  ## @param volumePermissions.image.pullPolicy Bitnami Shell image pull policy
  ## @param volumePermissions.image.pullSecrets Bitnami Shell image pull secrets
  ##
  image:
    registry: docker.io
    repository: bitnami/bitnami-shell
    tag: 10-debian-10
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## e.g:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
  ## Command to execute during the volumePermission startup
  ## command: ['sh', '-c', 'echo "hello world"']
  command: {}
  ## Init container's resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ## @param volumePermissions.resources.limits The resources limits for the init container
  ## @param volumePermissions.resources.requests The requested resources for the init container
  ##
  resources:
    limits: {}
    requests: {}
  ## Init container Container Security Context
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
  ## @param volumePermissions.containerSecurityContext.runAsUser Set init container's Security Context runAsUser
  ## NOTE: when runAsUser is set to special value "auto", init container will try to chown the
  ##   data folder to auto-determined user&group, using commands: `id -u`:`id -G | cut -d" " -f2`
  ##   "auto" is especially useful for OpenShift which has scc with dynamic user ids (and 0 is not allowed)
  ##
  containerSecurityContext:
    runAsUser: 0

## Configure extra options for liveness, readiness, and startup probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
livenessProbe:
  enabled: true
  initialDelaySeconds: 20
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 10
readinessProbe:
  enabled: true
  initialDelaySeconds: 20
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 10
startupProbe:
  enabled: true
  initialDelaySeconds: 0
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 30

## test container details
test:
  enabled: false
  image:
    repository: dduportal/bats
    tag: 0.4.0

## ltb-passwd
# For more parameters check following file: ./charts/ltb-passwd/values.yaml
ltb-passwd:
  enabled: true
  image:
    tag: 5.2.3
  ingress:
    enabled: true
    annotations: {}
    # See https://kubernetes.io/docs/concepts/services-networking/ingress/#ingressclass-scope
    # ingressClassName: nginx
    path: /
    pathType: Prefix
    ## Ingress Host
    hosts:
      - 'ssl-ldap2.unifycx.unifyco.ai'
    ## Ingress cert
    tls: []
    # - secretName: ssl-ldap2.unifycx.unifyco.ai
    #   hosts:
    #   - ssl-ldap2.unifycx.unifyco.ai
  # ldap:
  # if you want to restrict search base tree for users instead of complete domain
  # searchBase: "ou=....,dc=mydomain,dc=com"
  # if you want to use a dedicated bindDN for the search with less permissions instead of cn=admin one
  # bindDN: "cn=....,dc=mydomain,dc=com"
  # if you want to use a specific key of the credentials secret instead of the default one (LDAP_ADMIN_PASSWORD)
  # passKey: LDAP_MY_KEY

## phpldapadmin
## For more parameters check following file: ./charts/phpldapadmin/values.yaml
phpldapadmin:
  enabled: true
  image:
    tag: 0.9.0
  env:
    PHPLDAPADMIN_LDAP_CLIENT_TLS_REQCERT: 'never'
  ingress:
    enabled: true
    annotations: {}
    ## See https://kubernetes.io/docs/concepts/services-networking/ingress/#ingressclass-scope
    ingressClassName: nginx
    path: /
    pathType: Prefix
    ## Ingress Host
    hosts:
      - phpldapadmin.unifycx.unifyco.ai
    ## Ingress cert
    tls: []
    # - secretName: phpldapadmin.unifycx.unifyco.ai
    #   hosts:
    #   - phpldapadmin.unifycx.unifyco.ai

With the above values.yaml file, I'm now unable to start the openldap service. This is a major blocker for work I need to get done.

After some tests I can't reproduce your issue.
I used the following values:

global:
  imageRegistry: ""
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  imagePullSecrets: [""]
  storageClass: ""
  ldapDomain: 'unifycx.unifyco.ai'
  ## Default Passwords to use, stored as a secret. Not used if existingSecret is set.
  adminPassword:  Not@SecurePassw0rd
  configPassword: Not@SecurePassw0rd
  ldapPort: 1389
  sslLdapPort: 1636

And got the following result, as expected:

I have no name!@sa-openldap-0:/$ LDAPTLS_REQCERT=never ldapsearch -x -D 'cn=admin,dc=unifycx,dc=unifyco,dc=ai' -w Not@SecurePassw0rd -H ldaps://localhost:1636 -b 'dc=unifycx,dc=unifyco,dc=ai'
# extended LDIF
#
# LDAPv3
# base <dc=unifycx,dc=unifyco,dc=ai> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# unifycx.unifyco.ai
dn: dc=unifycx,dc=unifyco,dc=ai
objectClass: dcObject
objectClass: organization
dc: unifycx
o: example

# users, unifycx.unifyco.ai
dn: ou=users,dc=unifycx,dc=unifyco,dc=ai
objectClass: organizationalUnit
ou: users

# user01, users, unifycx.unifyco.ai
dn: cn=user01,ou=users,dc=unifycx,dc=unifyco,dc=ai
cn: User1
cn: user01
sn: Bar1
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
userPassword:: Yml0bmFtaTE=
uid: user01
uidNumber: 1000
gidNumber: 1000
homeDirectory: /home/user01

# user02, users, unifycx.unifyco.ai
dn: cn=user02,ou=users,dc=unifycx,dc=unifyco,dc=ai
cn: User2
cn: user02
sn: Bar2
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
userPassword:: Yml0bmFtaTI=
uid: user02
uidNumber: 1001
gidNumber: 1001
homeDirectory: /home/user02

# readers, users, unifycx.unifyco.ai
dn: cn=readers,ou=users,dc=unifycx,dc=unifyco,dc=ai
cn: readers
objectClass: groupOfNames
member: cn=user01,ou=users,dc=unifycx,dc=unifyco,dc=ai
member: cn=user02,ou=users,dc=unifycx,dc=unifyco,dc=ai

# search result
search: 2
result: 0 Success

# numResponses: 6
# numEntries: 5

@cbwilliamsnh are you able to make it work with the values I provided?
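If not, it may be worth starting from a fully clean install, roughly like this (assuming the release and namespace are both named openldap; note that deleting the PVCs wipes any existing LDAP data):

helm uninstall openldap -n openldap
kubectl delete pvc -n openldap --all
helm install openldap -n openldap -f values.yaml helm-openldap/openldap-stack-ha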

Sorry for the delayed response. I've been directed to use MS Entra ID instead.

@jp-gouin
Hey, I'm having the same problem.
Should I do anything more after deploying the Helm chart?
What I did is exactly the same as cbwilliamsnh: changed the ldap domain and just deployed.

@kimkihoon0515 please open a new issue and provide your values.yaml file