jckuester/awsrm

Piping `awsls ... | awsrm` spawns a Terraform provider process for each resource and crashes

Veetaha opened this issue · 4 comments

Here is what I see when I use a pipe:

Output logs
```
$ awsls aws_lambda_function | awsrm --debug

   • input via pipe           
   • found already installed Terraform provider name=aws path=/home/veetaha/.awsrm/terraform-provider-aws_v3.42.0_x5 version=3.42.0
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2
   • start launching new instance of Terraform AWS Provider profile=N/A region=us-east-2

Error: failed to configure provider (name=aws, version=3.42.0): error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.

Please see https://registry.terraform.io/providers/hashicorp/aws
for more information about providing credentials.

Error: NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors
```

This also leaks all the Terraform provider processes: they are left hanging and eating my PC's RAM:

[screenshot of the leaked terraform-provider-aws processes]
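
As a stopgap, the leftover processes can be cleaned up manually with something like the following (assuming they all match the `terraform-provider-aws` binary name shown in the debug log above):

```sh
# List the leftover Terraform AWS provider processes
ps aux | grep terraform-provider-aws

# Kill them to reclaim the RAM (match on the full command line)
pkill -f terraform-provider-aws
```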

However, if I invoke the same command with all the resource identifiers passed as CLI arguments, everything works fine: only one Terraform provider process is created, no NoCredentialProviders error is triggered, and no processes are leaked.
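
For comparison, the working invocation looks roughly like this (the function names are placeholders for the IDs that `awsls` printed, and the `<type> <id> ...` argument form is only a sketch of the awsrm usage):

```sh
# Works: resource IDs passed directly as arguments
# (one provider process, no NoCredentialProviders error, no leaked processes)
awsrm aws_lambda_function my-function-1 my-function-2 my-function-3
```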

@jckuester doesn't #7 fix this issue?

Also, a related question: why isn't 0.4.0 just a patch release?

Yes, 0.4.0 fixes the issue that you reported, @Veetaha. Thank you for that :-) And you are right, it should have been a patch release, my bad.

Okay, never mind. So do you want to close this issue?