Primary language: Dockerfile · License: Apache-2.0

cloudera-iac

This repo holds all the lifecycle steps for deploying a Cloudera Hadoop cluster in VMs.
There are two main modes:

  1. a mode where provisioning of the VMs is also part of the lifecycle steps
  2. a mode where the VMs already exist, so the provisioning step is skipped

The governing stack: Gradle, Conda, Docker, Ansible

modules

ansible-conda-pack

This Gradle module generates a Conda-Pack archive for Ansible inside a Docker container. While the container runs, the Conda-Pack artifact is generated and exported to the local filesystem, to be used later by the Docker image for the Ansible Controller.
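The flow above can be sketched as a minimal Dockerfile; this is an assumption-laden illustration, not the module's actual build file (image name, environment name, and paths are all hypothetical):

```dockerfile
# Hypothetical sketch; image name, environment name, and paths are illustrative.
FROM continuumio/miniconda3

# Build an environment containing Ansible, plus the conda-pack tool itself.
RUN conda install -y -c conda-forge conda-pack && \
    conda create -y -n ansible -c conda-forge python ansible

# Pack the environment into a relocatable archive; the Gradle dockerRun task
# would then export this artifact from the running container.
RUN conda pack -n ansible -o /tmp/ansible.tar.gz
```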

ansible-ctrl

This Gradle module creates the Docker image for the Ansible Controller.
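A sketch of how the controller image might consume the Conda-Pack artifact (base image, archive name, and target directory are assumptions, not what the module actually uses):

```dockerfile
# Hypothetical sketch; base image, archive name, and target directory are illustrative.
FROM centos:7

# ADD auto-extracts the tarball produced by :ansible-conda-pack.
ADD ansible.tar.gz /opt/ansible
ENV PATH=/opt/ansible/bin:$PATH

# conda-unpack rewrites hard-coded path prefixes inside the relocated environment.
RUN conda-unpack
```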

provision

This Gradle module uses Docker Compose to spin up the VMs to be used later for the Cloudera Hadoop cluster.
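One plausible shape for the generated Compose file, with systemd-capable containers standing in for VMs — service names, image, and mounts here are assumptions, not the actual generated output:

```yaml
# Hypothetical sketch of the generated docker-compose file.
services:
  node1:
    image: ansible-managed          # image built by the :ansible-managed module
    hostname: node1.cluster.local
    privileged: true                # required for running systemd inside the container
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro   # systemd's cgroup dependency
```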

usage

sandbox scenario

  1. building docker images

    1. build the control image from scratch (:ansible-conda-pack, :ansible-ctrl)

      ./gradlew clean prepare
    2. Alternatively

      1. build the :ansible-conda-pack Docker image first and export the intermediate artifact

        ./gradlew ansible-conda-pack:clean ansible-conda-pack:docker ansible-conda-pack:dockerRun
        # If you would like to remove the container at the end (not currently recommended, as Gradle
        # cannot track state correctly without the container existing)
        ./gradlew ansible-conda-pack:clean ansible-conda-pack:docker ansible-conda-pack:dockerRun ansible-conda-pack:dockerRemoveContainer
      2. then build the :ansible-ctrl Docker image once the :ansible-conda-pack artifact already exists

        ./gradlew ansible-ctrl:clean ansible-ctrl:docker ansible-ctrl:dockerRun
        # If you would like to remove the container at the end (not currently recommended, as Gradle
        # cannot track state correctly without the container existing)
        ./gradlew ansible-ctrl:clean ansible-ctrl:docker ansible-ctrl:dockerRun ansible-ctrl:dockerRemoveContainer
    3. build the :ansible-managed Docker image once the :ansible-ctrl artifact already exists

      ./gradlew ansible-managed:clean ansible-managed:docker --info
  2. provision the VMs using docker-compose up

    ./gradlew provision:generateDockerCompose provision:dockerComposeUp
  3. Running playbook

    1. Attach a shell to the ansible-ctrl container in the Compose project

    2. Run playbook

      cd ~
      ansible-playbook -i ansible_hosts.yml git/cloudera-playbook/site.yml --extra-vars "krb5_kdc_type=none" --skip-tags krb5 --ask-vault-pass

      NOTES:

      • cloudera_archive_authn is encrypted and set in group_vars
      • this is applicable for a non-secure cluster!
  4. unprovision VMs using docker-compose down

    ./gradlew provision:dockerComposeDown
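The `ansible_hosts.yml` inventory referenced in the playbook step could look roughly like this; the group and host names are illustrative only, not the repo's actual inventory:

```yaml
# Hypothetical inventory sketch for the sandbox scenario.
all:
  children:
    scm_server:
      hosts:
        node1.cluster.local:
    cluster:
      hosts:
        node2.cluster.local:
        node3.cluster.local:
```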

private cloud/bare metal scenario

  1. building docker images

    1. build the control image from scratch (:ansible-conda-pack, :ansible-ctrl)

      ./gradlew clean prepare
    2. Alternatively

      1. build the :ansible-conda-pack Docker image first and export the intermediate artifact

        ./gradlew ansible-conda-pack:clean ansible-conda-pack:docker ansible-conda-pack:dockerRun
        # If you would like to remove the container at the end (not currently recommended, as Gradle
        # cannot track state correctly without the container existing)
        ./gradlew ansible-conda-pack:clean ansible-conda-pack:docker ansible-conda-pack:dockerRun ansible-conda-pack:dockerRemoveContainer
      2. then build the :ansible-ctrl Docker image once the :ansible-conda-pack artifact already exists

        ./gradlew ansible-ctrl:clean ansible-ctrl:docker ansible-ctrl:dockerRun
        # If you would like to remove the container at the end (not currently recommended, as Gradle
        # cannot track state correctly without the container existing)
        ./gradlew ansible-ctrl:clean ansible-ctrl:docker ansible-ctrl:dockerRun ansible-ctrl:dockerRemoveContainer
  2. provision the ansible-ctrl VM using docker-compose up

    ./gradlew provision:priv-cloud:generateDockerCompose provision:priv-cloud:dockerComposeUp
  3. Running playbook

    1. Attach a shell to the ansible-ctrl container in the Compose project

    2. Ensure the build subnet can reach the target hosts in the priv-cloud environment

    3. decrypt the private-key file for SSH communication

      cd ~
      .local/bin/decrypt_pk.sh
    4. Run playbook

      cd ~
      ansible-playbook -i ansible_hosts.yml git/cloudera-playbook/site.yml --extra-vars "krb5_kdc_type=none" --skip-tags krb5 --ask-vault-pass --private-key private_key.txt

      NOTES:

      • this is applicable for a non-secure cluster!

ansible-vault

  • creating an encrypted string value for a variable

    ansible-vault encrypt_string --vault-id dev@prompt 'foobar' --name 'variable_name'
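The command prints a YAML snippet that can be pasted into group_vars. It looks roughly like this, with the ciphertext lines elided; the `dev` label in the header comes from the `--vault-id dev@prompt` option:

```yaml
variable_name: !vault |
          $ANSIBLE_VAULT;1.2;AES256;dev
          ...ciphertext elided...
```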

Extra

systemd inside Docker container

  • Under WSL
    In WSL there is no systemd, so /sys/fs/cgroup/systemd is not present.
    This location is a dependency for running a Docker container with systemd, because the directory is shared with the host via a volume mount.
    In this case the systemd directory needs to be created and, typically, the named systemd cgroup mounted onto it:

    sudo mkdir /sys/fs/cgroup/systemd
    sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd

Useful Ad-Hoc Ansible commands

  • useful link: ansible-ad-hoc-commands

  • fact gathering

    # All facts
    ansible -i ansible_hosts all -m setup -v --user <user> --key-file <id_rsa_user> > details.out
    # Filtered view
    ansible -i ansible_hosts all -m setup -a 'filter=ansible_*_mb' -v --user <user> --key-file <id_rsa_user> > details.out
  • sudoers rights

    ansible -i ansible_hosts all -m shell -a "sudo -l" -v --user <user> --key-file <id_rsa_user> > sudo.out
  • test become root functionality

    # When passwordless sudo is configured, no sudo password is needed
    ansible -i ansible_hosts_1 all -m shell -a "cat /etc/passwd" -b -v --user <user> --key-file <id_rsa_user> > become.out
    # Otherwise
    ansible -i ansible_hosts_1 all -m shell -a "cat /etc/passwd" -b -K -v --user <user> --key-file <id_rsa_user> > become.out