flux-iac/tofu-controller

flux tf-controller dependency management with git repository source

Closed this issue · 5 comments

jmuma1 commented

flux tf-controller dependency management with git repository source

PULL REQUEST: jmuma1/tf-controller-muma#2

Following this guide on dependency management with the Flux tf-controller (https://docs.gitops.weave.works/docs/terraform/using-terraform-cr/depends-on/), I am trying to apply a YAML file called tf-resources.yaml in order to create an AWS S3 bucket (random/main.tf), and then, from a separate directory (so-random/main.tf), create its associated S3 bucket ACL, which depends on the id of the S3 bucket created in the 'random/main.tf' directory. However, the example in the docs uses an OCI Repository as its source, while my source is a Git Repository, so the example is not directly applicable to my use case; I made changes where necessary to adapt it to a Git Repository source. When I run [ tfctl get ] to check the status of [ terraform plan/apply ] for the 2 resources in tf-resources.yaml, I eventually get a 'terraform plan' error from the tf-controller when it is time to create the second resource (aws-s3-bucket-acl): [ error running Plan: rpc error: code = Internal desc = variable "id" was required but not supplied ]. I am not sure why, since I have provided a variables.tf file with a variable name that matches the name I set in the [ as: ] field in tf-resources.yaml.
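For reference, here is a minimal sketch of the shape of the two Terraform CRs with a Git Repository source. The resource names, namespace, paths, and apiVersion here are assumptions based on my description above, not a copy of the actual tf-resources.yaml in the PR, and the apiVersion may differ by controller version:

```yaml
# Sketch only: names, paths, and apiVersion are assumptions and may
# differ from the actual tf-resources.yaml in the linked PR.
apiVersion: infra.contrib.fluxcd.io/v1alpha2
kind: Terraform
metadata:
  name: aws-s3-bucket
  namespace: flux-system
spec:
  interval: 1m
  approvePlan: auto
  path: ./random
  sourceRef:
    kind: GitRepository          # Git source instead of the docs' OCIRepository
    name: tf-controller-muma
    namespace: flux-system
  writeOutputsToSecret:
    name: aws-s3-bucket-outputs  # the bucket id output is written here
    outputs:
    - id
---
apiVersion: infra.contrib.fluxcd.io/v1alpha2
kind: Terraform
metadata:
  name: aws-s3-bucket-acl
  namespace: flux-system
spec:
  interval: 1m
  approvePlan: auto
  path: ./so-random
  sourceRef:
    kind: GitRepository
    name: tf-controller-muma
    namespace: flux-system
  dependsOn:
  - name: aws-s3-bucket          # wait for the bucket CR to be ready first
```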

FOR MORE CONTEXT ON HOW I GET TO THIS POINT:

  • I create an EKS cluster with flux installed in it (The terraform files for this are not provided)
  • I run [ aws eks update-kubeconfig --region us-east-1 --name "name of eks cluster" --dry-run > ~/.kube/config ] to update the certificate in the kubeconfig file
  • I run [ kubectl apply -f tf-controller.yaml ] to create the tf-controller that leverages terraform commands
  • Install flux binary
  • Install tfctl cli
  • Finally, I run [ kubectl apply -f tf-resources.yaml ] to attempt to create the aws-s3-bucket and its dependent aws-s3-bucket-acl
  • ***NOTE: In tf-resources.yaml, I removed the secrets/Private Key for security reasons. If anyone plans to pull this down locally and try to recreate my steps, let me know and I will provide a new secret/private key to use for this issue
  • The aws-s3-bucket is successfully created, as it shows up in the console, and after running [ tfctl get ] a couple of times to see the status updates on the terraform plan/apply, tfctl eventually outputs [ Outputs written: main@sha1:45646546546544456458zfghghfgghfhfghfgh8g ]
  • When I check the details of the aws-s3-buckets-outputs secret that gets created, I see the base64-encoded id in there. To further confirm that the right bucket id is outputted and stored in this aws-s3-buckets-outputs secret, I use a base64 converter, which confirms the decoded value is the desired name/id of the S3 bucket
  • But the second part of [tfctl get] output says [ error running Plan: rpc error: code = Internal desc = variable "id" was required but not supplied ]
  • The containers then terminate and try to recreate, so the terraform workflow cycle repeats
  • Please share thoughts and help debug from this pull request: jmuma1/tf-controller-muma#2
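As a sanity check without an online converter, the id can be decoded straight from the cluster. The secret and key names below match my setup as described above; the bucket name in the self-contained round trip is a stand-in value:

```shell
# Decode the id directly from the outputs secret (names match my setup):
#   kubectl get secret aws-s3-buckets-outputs -o jsonpath='{.data.id}' | base64 -d
# Self-contained round trip showing the same base64 encoding;
# "my-random-s3-bucket" is a stand-in, not the real bucket id:
encoded=$(printf 'my-random-s3-bucket' | base64)
printf '%s' "$encoded" | base64 -d   # prints: my-random-s3-bucket
```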

Hi @jmuma1, the as: id that you specified in the YAML works only at the YAML level; you have to refer to it via a template, for example.

Hi @jmuma1, if you can share the tf-resources.yaml file, it would help to identify what is wrong.

jmuma1 commented

> Hi @jmuma1, the as: id that you specified in the YAML works only at the YAML level; you have to refer to it via a template, for example.

Hi. I solved the variable 'id' issue: I had to use varsFrom and varsKeys in the YAML file instead. But now I have a new issue: I commented out the S3 resources in the .tf file and am trying to apply a VPC and an associated subnet instead, yet for some reason the tf-controller still creates an S3 bucket. I deleted all the state-file secrets, nuked my account, and redeployed, but the tf-controller still recognizes the S3 bucket resources and ignores the VPC and subnet.
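For anyone hitting the same "variable was required but not supplied" error, this is roughly the shape of the fix on the dependent Terraform CR. The secret, key, and resource names are assumptions matching my setup and may differ in yours:

```yaml
# On the dependent CR (aws-s3-bucket-acl): instead of `as:`, read the
# upstream outputs secret as Terraform variables via varsFrom/varsKeys.
# Secret and key names are assumptions matching my setup.
spec:
  dependsOn:
  - name: aws-s3-bucket
  varsFrom:
  - kind: Secret
    name: aws-s3-bucket-outputs
    varsKeys:
    - id          # becomes var.id, matching variables.tf in so-random/
```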

jmuma1 commented

> Hi @jmuma1, if you can share the tf-resources.yaml file, it would help to identify what is wrong.

Hi. I linked the pull request with the necessary files, but here you go: jmuma1/tf-controller-muma#2. However, I solved the variable 'id' issue: I had to use varsFrom and varsKeys in the YAML file instead. But now I have a new issue: I commented out the S3 resources in the .tf file and am trying to apply a VPC and an associated subnet instead, yet for some reason the tf-controller still creates an S3 bucket. I deleted all the state-file secrets, nuked my account, and redeployed, but the tf-controller still recognizes the S3 bucket resources and ignores the VPC and subnet.