jp-gouin/helm-openldap

LDIF files in /ldifs not loaded

jvizier opened this issue · 18 comments

Describe the bug

I added some LDIF files (root / ou / users) via a ConfigMap or the customLdifFiles value. The files were created in /ldifs, but nothing shows up with ldapsearch or phpldapadmin. When I add the files manually with ldapadd, it works. I tried with different OpenLDAP versions.
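For reference, the manual ldapadd workaround looks roughly like this; a sketch with placeholder pod name, bind DN, and password, assuming the Bitnami image's default internal port 1389:

# Placeholder names/credentials; adjust to your release.
kubectl exec -it openldap-0 -- \
  ldapadd -x -H ldap://localhost:1389 \
    -D "cn=admin,dc=example,dc=com" -w adminpassword \
    -f /ldifs/00-root.ldif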

To Reproduce
Use customLdifFiles:

customLdifFiles:
  00-root.ldif: |-
    dn: dc=example,dc=com
    objectClass: dcObject
    objectClass: organization
    dc: example
    o: Example

Expected behavior
LDIF entries provided via customLdifFiles or a ConfigMap are loaded into the directory.

Hi,

Can you try with LDAP_SKIP_DEFAULT_TREE: yes?

Also, if you want to change the top object, you can set global.domain to example.com.
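For example (a sketch, assuming the chart repo is added as helm-openldap and the release is named openldap):

# Hypothetical repo/release names; the two values are the ones suggested above.
helm upgrade --install openldap helm-openldap/openldap-stack-ha \
  --set global.ldapDomain=example.com \
  --set-string env.LDAP_SKIP_DEFAULT_TREE=yes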

Hello,

I already tried LDAP_SKIP_DEFAULT_TREE. I tried to debug via the logs but saw nothing. When does OpenLDAP load the files from /ldifs?

When you say global.domain, do you mean global.ldapDomain?

Thanks ;)

Yeah, it's indeed global.ldapDomain; you can use it to configure your domain easily.

Actually, you have to set global.ldapDomain, as it sets the mandatory LDAP_ROOT variable and is used to set up replication.

Yes, I already use that parameter. Here is a sample of my values.yaml:

global:
  imageRegistry: ""
  ldapDomain: dc=example,dc=com
  adminPassword: xxxxxxxxxxxxxxxxxxxxxxxxxxxx 
  configPassword: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
  ldapPort: 389
  sslLdapPort: 636

clusterDomain: cluster.local
replicaCount: 1
image:
  # From repository https://hub.docker.com/r/bitnami/openldap/
  repository: bitnami/openldap
  tag: 2.6.6
  pullPolicy: Always
  pullSecrets: []

logLevel: error


customTLS:
  enabled: false

env:
 BITNAMI_DEBUG: "false"
 LDAP_LOGLEVEL: "256"
 LDAP_CUSTOM_LDIF_DIR: "/ldifs"
 LDAP_TLS_ENFORCE: "false"
 LDAPTLS_REQCERT: "never"
 LDAP_ENABLE_TLS: "no"
 LDAP_CONFIG_ADMIN_ENABLED: "yes"
 LDAP_CONFIG_ADMIN_USERNAME: "admin"
 LDAP_SKIP_DEFAULT_TREE: "yes"
 LDAP_ADD_SCHEMAS: "yes"
 LDAP_EXTRA_SCHEMAS: "core,cosine,duaconf,dyngroup,inetorgperson,misc,nis,openldap"

customLdifFiles:
  00-root.ldif: |-
    dn: dc=example,dc=com
    objectClass: dcObject
    objectClass: organization
    dc: example
    o: example

  01-users.ldif: |- 
    dn: ou=users,dc=example,dc=com
    objectClass: organizationalUnit
    ou: users

  03-user.ldif: |-
    dn: cn=postfix,ou=users,dc=mydomain,dc=com
    cn: postfix
    objectClass: simpleSecurityObject
    objectClass: organizationalRole
    userPassword: {SSHA}xxxxxxxxxxxxxxxxxxxxx
        
replication:
  enabled: false
persistence:
  enabled: true
  accessModes:
    - ReadWriteOnce
  size: 2Gi
  storageClass: "nfs-csi"

podSecurityContext:
  enabled: true
  fsGroup: 1000

containerSecurityContext:
  enabled: true
  runAsUser: 1000
  runAsNonRoot: true

podAntiAffinityPreset: soft

phpldapadmin:
  enabled: false

Can you try this for the organisation:

dn: dc=test,dc=example
dc: test
o: Example Inc.
objectclass: top
objectclass: dcObject
objectclass: organization

And the 03-user.ldif entry should be:

03-user.ldif: |-
    dn: cn=postfix,ou=users,dc=example,dc=com
    cn: postfix
    objectClass: simpleSecurityObject
    objectClass: organizationalRole
    userPassword: {SSHA}xxxxxxxxxxxxxxxxxxxxx

I also noticed that you define LDAP_EXTRA_SCHEMAS. This is set by the chart, so you shouldn't define it.

Hello,

I changed the org and the user and commented out LDAP_EXTRA_SCHEMAS; same thing. I can bind with ldapsearch but nothing is returned, and the organization is not created.

With docker-compose it works; the LDIF files are imported.

Set BITNAMI_DEBUG: "true" and look at the logs again. In my case I wrongly assumed that the custom LDIFs get re-applied on pod restart. What actually happens is that if an error occurs on the first initialization, the Bitnami image skips initialization on restart, since it does not clean up LDAP_DATA_DIR. So I had to fix the error, delete the PVC, and restart the pod to re-initialize the database correctly.
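A minimal sketch of that recovery, assuming the default StatefulSet naming (pod openldap-0, PVC data-openldap-0; adjust to your release):

# Deleting the PVC wipes LDAP_DATA_DIR so initialization runs again on the next start.
kubectl logs openldap-0 --previous   # inspect the failed first boot
kubectl delete pvc data-openldap-0   # remove the stale data volume
kubectl delete pod openldap-0        # recreate the pod; init re-runs with empty data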

Hello @jvizier,
I installed OpenLDAP with your config and I still get an error; the LDIF files are not imported :/

My config:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: ${name}
  annotations:
    fluxcd.io/automated: "true"
spec:
  releaseName: ${name}
  timeout: 1m
  interval: 10m
  chart:
    spec:
      chart: openldap-stack-ha
      version: 4.1.2
      sourceRef:
        kind: HelmRepository
        name: openldap
        namespace: ${namespace}
      interval: 1m
  values:
    #
    #    DEFAULT VALUE YAML
    #    https://github.com/jp-gouin/helm-openldap/blob/master/values.yaml
    #
    global:
      ldapDomain: "${external_domain}"
      adminPassword: ${ldap_password}
      configPassword: "${my_password}"
    image:
      tag: 2.6.6 # {"$imagepolicy": "openldap:openldap:tag"}
    logLevel: debug
    commonAnnotations:
      reloader.stakater.com/match: "true"
    replicaCount: 1
    env:
       BITNAMI_DEBUG: "false"
       LDAP_LOGLEVEL: "256"
       LDAP_CUSTOM_LDIF_DIR: "/ldifs"
       LDAP_TLS_ENFORCE: "false"
       LDAPTLS_REQCERT: "never"
       LDAP_ENABLE_TLS: "no"
       LDAP_SKIP_DEFAULT_TREE: "yes"
       LDAP_ADD_SCHEMAS: "yes"
    customLdifCm: openldap-ldif
    resources:
      requests:
        cpu: "100m"
        memory: "256Mi"
      limits:
        cpu: "2000m"
        memory: "512Mi"
    nodeSelector:
      kubernetes.io/arch: amd64
    podSecurityContext:
      enabled: true
      fsGroup: 1000
    containerSecurityContext:
      enabled: true
      runAsUser: 1000
      runAsNonRoot: true
    podAntiAffinityPreset: soft
    ltb-passwd:
      image:
        tag: 5.3.3 # {"$imagepolicy": "openldap:openldap-passwd:tag"}
      ingress:
        annotations:
          external-dns.alpha.kubernetes.io/target: ${external_domain}
          kubernetes.io/ingress.class: traefik
          cert-manager.io/cluster-issuer: letsencrypt-cloudflare
#          traefik.ingress.kubernetes.io/router.middlewares: authelia-authelia@kubernetescrd
          traefik.ingress.kubernetes.io/router.entrypoints: websecure
          traefik.ingress.kubernetes.io/router.tls: "true"
          gethomepage.dev/enabled: "true"
          gethomepage.dev/name: "${name_beautiful}"
          gethomepage.dev/description: "${description}"
          gethomepage.dev/group: "${group}"
          gethomepage.dev/icon: "${icon}"
        hosts:
          - "${sudomain_passwd}.${external_domain}"
          - "${sudomain_passwd}.${internal_domain}"
        path: /
    phpldapadmin:
      image:
        tag: 0.9.0 # {"$imagepolicy": "openldap:openldap-phpldapadmin:tag"}
      ingress:
        enabled: true
        annotations:
          external-dns.alpha.kubernetes.io/target: ${external_domain}
          kubernetes.io/ingress.class: traefik
          cert-manager.io/cluster-issuer: letsencrypt-cloudflare
#          traefik.ingress.kubernetes.io/router.middlewares: authelia-authelia@kubernetescrd
          traefik.ingress.kubernetes.io/router.entrypoints: websecure
          traefik.ingress.kubernetes.io/router.tls: "true"
          gethomepage.dev/enabled: "true"
          gethomepage.dev/name: "${name_beautiful}"
          gethomepage.dev/description: "${description}"
          gethomepage.dev/group: "${group}"
          gethomepage.dev/icon: "${icon}"
        hosts:
          - "${subdomain}.${external_domain}"
          - "${subdomain}.${internal_domain}"
        path: /

And an SSL error in phpldapadmin (screenshot).

I checked the /ldifs folder and all the files loaded from the ConfigMap are present, but they are not loaded into the LDAP server.

Looks like it's related to #136; I suggest not using 2.6.6 until the chart is compatible.
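For example, pinning an earlier tag until then (2.6.3 is an assumption here; verify the tag exists on Docker Hub):

# image.tag is the same chart value shown in the configs above.
helm upgrade openldap helm-openldap/openldap-stack-ha --reuse-values \
  --set-string image.tag=2.6.3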

Hello @jvizier, I installed OpenLDAP with your config and I still get an error; the LDIF files are not imported :/

[config identical to the comment above]

I checked the /ldifs folder and all the files loaded from the ConfigMap are present, but they are not loaded into the LDAP server.

Hi @m4dm4rtig4n, I'm facing the same problem when using a ConfigMap. Do you have a working solution for it? Many thanks ;-)

Yes, I use an external Helm template instead:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: ${name}
  annotations:
    fluxcd.io/automated: "true"
spec:
  releaseName: ${name}
  timeout: 1m
  interval: 10m
  chart:
    spec:
      chart: app-template
      version: 2.2.0
      sourceRef:
        kind: HelmRepository
        name: bjw-s
        namespace: flux-infra
      interval: 1m
  values:
    #
    #    DEFAULT VALUE YAML
    #    https://github.com/bjw-s/helm-charts/blob/main/charts/library/common/values.yaml
    #
    service:
      main:
        type: LoadBalancer
        loadBalancerIP: 192.168.100.100
        ports:
          http:
            enabled: false
          ldap:
            enabled: true
            primary: true
            port: 389
            targetPort: 1389
            protocol: TCP
          ldap-ssl:
            enabled: true
            port: 636
            targetPort: 1636
            protocol: TCP
    defaultPodOptions:
      dnsConfig:
        options:
          - name: ndots
            value: "1"
      nodeSelector:
        kubernetes.io/arch: amd64
    controllers:
      main:
        enabled: true
        replicas: 1
        strategy: RollingUpdate
        rollingUpdate:
          unavailable: 1
          surge: 1
        revisionHistoryLimit: 3
        containers:
          main:
            image:
              repository: ${docker_image}
              tag: 2.6.6 # {"$imagepolicy": "openldap:openldap:tag"}
            resources:
              limits:
                memory: 512Mi
              requests:
                memory: 128Mi
            probes:
              liveness:
                enabled: false
              readiness:
                enabled: false
              startup:
                enabled: false
            env:
              - name: BITNAMI_DEBUG
                value: "true"
              - name: LDAP_ADMIN_USERNAME
                valueFrom:
                  secretKeyRef:
                    name: ldap-global
                    key: ldap_username
              - name: LDAP_ADMIN_PASSWORD
                valueFrom:
                  secretKeyRef:
                    name: ldap-global
                    key: ldap_password
              - name: LDAP_USERS
                valueFrom:
                  secretKeyRef:
                    name: ldap-global
                    key: ldap_username
              - name: LDAP_PASSWORDS
                valueFrom:
                  secretKeyRef:
                    name: ldap-global
                    key: ldap_password
              - name: LDAP_ROOT
                valueFrom:
                  secretKeyRef:
                    name: ldap-global
                    key: ldap_root
              - name: LDAP_ADMIN_DN
                valueFrom:
                  secretKeyRef:
                    name: ldap-global
                    key: ldap_admin_dn
              - name: LDAP_CUSTOM_LDIF_DIR
                value: /ldif
    persistence:
      data:
        enabled: true
        size: 8Gi
        accessMode: ReadWriteOnce
        globalMounts:
          - path: /bitnami/openldap
      ldif:
        enabled: true
        accessMode: ReadWriteOnce
        size: 30Gi
        existingClaim: ${name}-nfs
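A quick way to confirm a mount like that actually reaches the container (pod name is a placeholder):

# Hypothetical pod name; list the mounted LDIF dir, then grep the import logs.
kubectl exec -it openldap-0 -- ls -l /ldif
kubectl logs openldap-0 | grep -i ldif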

@cvalentin-dkt so you're using env variables to pass in your user and LDIFs?

I just followed the advanced_examples in the docs, now that the 2.6.6 tag is supported, and I get the "This base cannot be created with PLA." error in phpldapadmin.

Hello @jvizier, I installed OpenLDAP with your config and I still get an error; the LDIF files are not imported :/

[config identical to the comment above]

And an SSL error in phpldapadmin (screenshot).

I checked the /ldifs folder and all the files loaded from the ConfigMap are present, but they are not loaded into the LDAP server.

This is the exact behaviour I get with the 2.6.6-fix image from @jp-gouin with replication disabled (as someone said, replication is bugged).

@cvalentin-dkt so you're using env variables to pass in your user and LDIFs?

I just followed the advanced_examples in the docs, now that the 2.6.6 tag is supported, and I get the "This base cannot be created with PLA." error in phpldapadmin.

I mount a folder from my NAS containing all the LDIF files (persistence/ldif) at /ldif in the container, and all the files are loaded directly by OpenLDAP.

@cvalentin-dkt so you're using env variables to pass in your user and LDIFs?

I mount a folder from my NAS containing all the LDIF files (persistence/ldif) at /ldif in the container, and all the files are loaded directly by OpenLDAP.

Interesting. I just spawned a shell in the openldap-0 container (no replicas) and saw all the correct files there. Yet phpldapadmin shows:

Logged in as: cn=admin,dc=mydomain,dc=com

Could not determine the root of your LDAP tree.
It appears that the LDAP server has been configured to not reveal its root.
Please specify it in config.php

This is from simply copy-pasting the advanced example into the values file, like:

# based on https://github.com/jp-gouin/helm-openldap
openldap-stack-ha:
  global:
    ldapDomain: "mydomain.com"
    existingSecret: "ldap-secret"
  
  replicaCount: 1
  replication:
    enabled: false

  logLevel: debug

  customTLS:
    enabled: false

  persistence:
    enabled: false

  env:
    LDAP_ALLOW_ANON_BINDING: "no"
    LDAP_SKIP_DEFAULT_TREE: "yes"

  # make sure dn in the following ACLs fits your domain
  customAcls: |- 
    dn: olcDatabase={2}mdb,cn=config
    changetype: modify
    replace: olcAccess
    olcAccess: {0}to *
      by dn.exact=gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth manage
      by * break
    olcAccess: {1}to attrs=userPassword,shadowLastChange
      by self write
      by dn="cn=admin,dc=mydomain,dc=com" write
      by anonymous auth
      by * none
    olcAccess: {2}to *
      by dn="cn=admin-read,dc=mydomain,dc=com" read
      by dn="cn=admin,dc=mydomain,dc=com" write
      by self read
      by * none

  customLdifFiles:
    00-root.ldif: |-
      dn: dc=mydomain,dc=com
      objectClass: top
      objectClass: dcObject
      objectClass: organization
      o: MY-DOMAIN
      dc: mydomain
    01-admin-read-user.ldif: |-
      dn: cn=admin-read,dc=mydomain,dc=com
      cn: admin-read
      mail: admin-read@mydomain.com
      objectClass: inetOrgPerson
      objectClass: top
      userPassword:: {SSHA}xxxxxxxxxxxx
      sn: Admin read only
    02-users-group.ldif: |-
      dn: ou=users,dc=mydomain,dc=com
      ou: users
      objectClass: organizationalUnit
      objectClass: top
      
  ltb-passwd: # self service password change web interface
    enabled: true
    ingress: # we do custom ingress using istio
      enabled: false
    ldap:
      bindDN: "cn=admin-read,dc=mydomain,dc=com" # make sure this matches your base DN
      searchBase: "ou=users,dc=mydomain,dc=com" # make sure this matches your base DN
      passKey: LDAP_ADMIN_READ_PASSWORD
    # check https://github.com/jp-gouin/helm-openldap/tree/master/advanced_examples#use-a-user-with-restricted-permissions-for-password-portal
    initContainers:
     - name: "install-logo"
       image: "{{ tpl .Values.image.repository . }}:{{ tpl .Values.image.tag . }}"
       command: [sh, -c]
       args:
         - |-
           cat <<EOF >/data/31-logo
           #!/command/with-contenv bash
           source /assets/functions/00-container
           PROCESS_NAME="logo"
           cp /tmp/ltb-logo.png /www/ssp/images/ltb-logo.png
           liftoff
           EOF
           chmod +x /data/31-logo
       volumeMounts:
         - name: data
           mountPath: /data
    volumes:
      - name: logos
        configMap:
          name: configmap-ldap-companylogos
      - name: data
        emptyDir: {}
    volumeMounts:
      - name: logos
        mountPath: /tmp/ltb-logo.png
        subPath: my-logo.png
      - name: data
        mountPath: /etc/cont-init.d/31-logo
        subPath: 31-logo

  phpldapadmin: # web admin interface to manage ldap
    enabled: true
    ingress: # we do custom ingress using istio
      enabled: false
    # check https://github.com/jp-gouin/helm-openldap/tree/master/advanced_examples#use-a-user-with-restricted-permissions-for-password-portal
    initContainers:
     - name: modify-configuration
       image: "{{ tpl .Values.image.repository . }}:{{ tpl .Values.image.tag . }}"
       command: [sh, -c]
       args:
         - |-
           # modify startup script in order to use logos
           cp -p /container/service/phpldapadmin/startup.sh /data/
           sed -i -e 's/exit 0/# exit 0/' /data/startup.sh
           cat <<'EOF' >>/data/startup.sh
           cp /logos/my-logo.png /var/www/phpldapadmin/htdocs/images/default/logo.png
           cp /logos/my-logo_50.png /var/www/phpldapadmin/htdocs/images/default/logo-small.png
           exit 0
           EOF
       volumeMounts:
         - mountPath: /data
           name: data
    volumes:
      - name: data
        emptyDir: {}
      - name: logos
        configMap:
          name: configmap-ldap-companylogos
    volumeMounts:
      - name: data
        mountPath: /data
      - name: logos
        mountPath: /logos
      - name: data
        mountPath: /container/service/phpldapadmin/startup.sh
        subPath: startup.sh

I've included the full values.yaml for debugging, @jp-gouin.
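One thing that may narrow down the "Could not determine the root of your LDAP tree" error: phpldapadmin discovers the base DN from the server's rootDSE, and the values above set LDAP_ALLOW_ANON_BINDING: "no". A quick check (host/port are placeholders):

# Anonymous rootDSE query for the advertised naming context.
# If this returns nothing while an authenticated query works, anonymous reads are
# blocked and phpldapadmin may need the base DN configured explicitly.
ldapsearch -x -H ldap://localhost:389 -s base -b "" namingContexts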

Hi @SamuelLHuber, please try with the latest release v4.2.2.
If it's not working, please open a new issue and provide your values and the log of the openldap-0 instance (the first boot, so make sure to use --previous if the pod has failed).

Will create a new one with all the details.
