aws/aws-cli

Ability to specify endpoint-url in profile

StFS opened this issue · 101 comments

StFS commented

Currently I don't seem to be able to specify an endpoint URL in my profile. I always have to specify the --endpoint-url option on the command line.

I would like to be able to do something like the following (in my ~/.aws/config file):

[default]
region = us-east-1
output = json

[profile nextcode]
region = myregion-1a
output = json
endpoint-url = http://c.my.other.aws.compatable.service.com

@StFS
If you do not mind me asking, what is the specific use case you need it for (i.e., which service or services would you plan to use it with)? The thing with --endpoint-url is that the value set will most likely not apply across different services. So if you configure a global endpoint URL, it may work for one service, but you will run into errors if you try to make requests to other services.

I could see having a per-service configuration for endpoint-url, but I am having trouble imagining a global endpoint-url across all AWS services.
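
For illustration, a per-service layout might look something like this (purely hypothetical syntax; nothing like it is supported today):

[profile nextcode]
region = myregion-1a
output = json
ec2 =
    endpoint_url = http://ec2.my.private.cloud.example.com
s3 =
    endpoint_url = http://s3.my.private.cloud.example.com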

StFS commented

Thanks for your reply.

Basically, we use AWS for some things but we also have our own EC2/S3 compatible private cloud setup (we're using https://qstack.com/).

Maybe I'm misunderstanding something with this. I know that the endpoints seem to differ depending on whether you're "talking" S3 or EC2. Basically, I think I want to be able to point one of my profiles to a different "set" of endpoints. The aws cli tool works fine for our AWS account, but when I want to use it for our private cloud setup I always have to specify both --profile (to get the credentials right) and --endpoint-url (so that aws contacts our private cloud endpoint instead of the AWS ones).

StFS commented

@kyleknap Am I totally out there with this?

@StFS
Your use case makes sense to me. I am not sure how many users have a similar use case. I think if we were to add it as a feature we would be more inclined to do it per service, instead of setting a global value for endpoint url. The good news is that if we were to add this feature you would be able to configure the endpoint url for each service per profile (so you can still just use --profile and not have to include --endpoint-url).

Marking as feature request.

StFS commented

Ok thanks for the response again.

I'm a bit confused about these endpoints. It seems that for AWS the endpoint may change depending on what region you're using and also what service you're using.

For us, we would only need to specify ec2_endpoint and s3_endpoint; is that what you mean by "per service"? Are there services other than EC2 and S3? I think I've seen something involving user management as well, but I'm not quite sure.
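
For context, AWS endpoints generally follow the pattern https://<service>.<region>.amazonaws.com, for example:

https://ec2.us-east-1.amazonaws.com
https://dynamodb.eu-west-1.amazonaws.com

So "per service" here would mean one configurable URL per service (EC2, S3, DynamoDB, IAM, and so on) within a profile.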

We have a similar use case (S3 alternative in Canada), and we would love to be able to configure the endpoint-url in the config file. Even if that means having a separate profile per service (currently it is only used for object storage), it would allow us to use the same commands in all of our environments.

kennu commented

I think this would also be useful for configuring a "local" profile for accessing DynamoDB Local. Currently you have to write something like

aws --profile local --endpoint-url http://localhost:8000 dynamodb list-tables

Same case here, @kennu. I have to set endpoint-url every time I need to run some command on dynamodb-local.

I would love to set http://localhost:8000 as my default endpoint-url for DynamoDB and leave the endpoint-url blank for a production profile. It's more error-proof.
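
For illustration, the kind of configuration being asked for here would be something like (hypothetical syntax, not supported at the time):

[profile local]
dynamodb =
    endpoint_url = http://localhost:8000

[profile production]
# no endpoint_url: requests go to the real AWS endpoints

so that aws dynamodb list-tables --profile local hits the local instance with no extra flag.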

How do I set the DynamoDB Local endpoint-url? I am using Ubuntu.

+1 for feature-request.

I am currently using aws cli to access minio.io server and specifying --endpoint-url every time is a pain.

I need this also, for connecting to local DynamoDB. I don't want to specify the endpoint for each command; a simple entry in the config will work for me so I can concentrate on the command. Thanks.

+1 for this feature of allowing endpoint-url in the configuration file.

If accepting endpoint-url at the top level doesn't make sense, what about accepting it at the service level?
e.g.:

[profile development]
aws_access_key_id=foo
aws_secret_access_key=bar
s3 =
  endpoint-url = test.org

Similar use case, pointing to a Eucalyptus cloud. Would be nice to have a per-service config for endpoint-url.

How about also just for the convenience of not having to type --endpoint-url on the command line?

+1

We have a third-party library that (as a minor aspect of its remote data access facilities) itself parses ~/.aws/credentials in order to construct the HTTP headers to access data in S3 buckets. Similarly to this issue, our users want to access their own S3-compatible data stores (cf samtools/htslib#436) and it would be useful if there were a standard well-known configuration file setting name for this purpose.

👍

We also have this use case, except our S3-like-service has a different authorisation scheme (a simple authorization: <token> header). How are people using aws-cli to authorise against their S3-like-services?

Check out this awscli plugin to set the endpoint per profile: https://github.com/wbingli/awscli-plugin-endpoint

Once you install it (pip install awscli-plugin-endpoint) and configure the plugin (aws configure set plugins.endpoint awscli_plugin_endpoint), you can set the endpoint per service in your profile as follows:

[profile local]
dynamodb =
    endpoint_url = http://localhost:8000

Now you can run a command against this endpoint for this service with only the profile name:

aws dynamodb list-tables --profile local

See more details on the project homepage (https://github.com/wbingli/awscli-plugin-endpoint).

Let me know your feedback, :)

Just to be clear, this is not only a benefit for people who are running AWS-compatible competitive products. It would also benefit those of us who are trying to use Amazon tools for local/offline development. http://stackoverflow.com/a/32260680/117471 is an example with DynamoDB. Using different profiles allows us to run our code with different configs for different environments. Not allowing endpoint_url to be specified in ~/.aws/config means that we have to build logic into our apps to follow a different (although small) path depending on the environment. That is something that should be avoided.

kennu commented

One such toolset is https://github.com/atlassian/localstack which provides local mocks for API Gateway, Kinesis, DynamoDB, DynamoDB Streams, Elasticsearch, S3, Firehose, Lambda, SNS, SQS and Redshift. It would be highly useful to create an AWS profile for deploying e.g. Serverless services to this mock platform and use this profile like any profile in deployment scripts.

Hello,

I am catering to multiple customers, and each customer provided me a different endpoint URL to access their account. Example below:
https://customer-1.signin.aws.amazon.com/console
https://customer-2.signin.aws.amazon.com/console

If I try to connect using the AWS CLI:
> aws ec2 describe-instances --profile customer1

I receive the error below:
Could not connect to the endpoint URL: "https://ec2.us-east-1a.amazonaws.com/"

Is it possible to add something like this?
[profile customer1]
region = us-east-1
output = json
account = customer1

[profile customer2]
region = us-east-1a
output = json
account = customer2

Something like AWS_ENDPOINT_URL would also be fantastic.
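
For illustration, an environment variable would make one-off overrides trivial, e.g.:

AWS_ENDPOINT_URL=http://localhost:8000 aws dynamodb list-tables

(A variable with exactly this name was eventually shipped; see the closing comments below.)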

I was very surprised to find this wasn't already a feature just as a matter of course. Is it any different from being able to specify the region in your config or as an option? I'd expect it to be a fairly trivial change in the same general code path.

yyolk commented

I would also love what @elsonrodriguez mentions

Another use case not mentioned, which comes to mind for me, is setting up an [elasticmq] profile for local SQS dev with elasticmq: "local dev" with no internet connection ;)

You can use the workaround below:

from boto3 import Session
# Overrides Session.client's keyword-argument defaults, in order: (region_name, api_version,
# use_ssl, verify, endpoint_url, aws_access_key_id, aws_secret_access_key, aws_session_token, config)
Session.client.__defaults__ = (None, None, False, None, 'http://localhost:4575', None, None, None, None)
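
Every client created after the patch then picks up the endpoint, for example:

from boto3 import Session
sns = Session().client('sns')  # now defaults to endpoint_url='http://localhost:4575'

Note that the tuple has to match Session.client's keyword-argument order for your installed boto3 version, so this is fragile across releases.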

@fatihtekin The whole question here is how to not do this but rather have it picked up from config.

@aldanor I totally agree; that is why it is a workaround, for when someone needs it for testing purposes, especially without changing the Lambda implementation.

+1 for this feature. Given the rise of non-AWS providers providing AWS-like functionality with compatible APIs, having this functionality would be very useful.

tfili commented

+1

Good Morning!

We're closing this issue here on GitHub, as part of our migration to UserVoice for feature requests involving the AWS CLI.

This will let us get the most important features to you, by making it easier to search for and show support for the features you care the most about, without diluting the conversation with bug reports.

As a quick UserVoice primer (if not already familiar): after an idea is posted, people can vote on the ideas, and the product team will be responding directly to the most popular suggestions.

We’ve imported existing feature requests from GitHub - Search for this issue there!

And don't worry, this issue will still exist on GitHub for posterity's sake. As it’s a text-only import of the original post into UserVoice, we’ll still be keeping in mind the comments and discussion that already exist here on the GitHub issue.

GitHub will remain the channel for reporting bugs.

Once again, this issue can now be found by searching for the title on: https://aws.uservoice.com/forums/598381-aws-command-line-interface

-The AWS SDKs & Tools Team

This entry can specifically be found on UserVoice at: https://aws.uservoice.com/forums/598381-aws-command-line-interface/suggestions/33168307-ability-to-specify-endpoint-url-in-profile

+1

Based on community feedback, we have decided to return feature requests to GitHub issues.

This would be incredibly useful for those that are interacting with GovCloud.

+1

+1

This would also be helpful when working with Snowball Edges.

Yes, it is great that so many people want this feature. I am using a third-party S3 provider, and the quality of the command-line tools developed by third parties is not so good. We are looking forward to an official Amazon version that satisfies this requirement.

Well, another way to implement this feature indirectly is to set an alias for your general command. Modify your ~/.bash_profile and add the following line:

alias awsoss='aws s3 --endpoint-url=http://127.0.0.1:9000'

Then you can use awsoss in place of the aws command for accessing S3.

If you have multiple endpoints, you can also use:
alias awsoss='aws s3 --endpoint-url=$OSS_ENDPOINT'
alias awsossapi='aws s3api --endpoint-url=$OSS_ENDPOINT'

Use export OSS_ENDPOINT=xxx to switch between endpoints.
Then you can use awsoss and awsossapi instead of "aws s3" and "aws s3api".
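
Putting it together (assuming a MinIO-style server on port 9000, as above):

export OSS_ENDPOINT=http://127.0.0.1:9000
awsoss ls s3://mybucket
awsossapi list-buckets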

+1 for this feature

+1 from me as well. We'd really like to leverage per-service endpoint URLs for development and testing.

onraz commented

+1 This is needed for local development/testing and Dockerized environments.

+1 for this

+1 for this feature.

This will be a useful feature for interacting with EC2 instances on a Snowball Edge (which acts like a local region).

To list the EC2 instances on a Snowball, I have to specify the endpoint even though I already have an AWS CLI named profile that has the rest of the Snowball Edge specific info (secret/access keys):

aws ec2 describe-instances --profile snowballEdge --endpoint-url http://${snowball_ip}:8008

+1 to this and environment variable for the same (e.g. AWS_ENDPOINT or AWS_S3_ENDPOINT)

+1 for this feature

Yeah this would be useful for local integration testing of other people's code

+1 from me too. For different or no endpoint url, I'd set up a different config profile.

+1 here.

+1 for this. It's very important for offline development. We use S3 in production, but we want to create self-contained development environments for local development that use an S3 clone.

For those still having issues, I've had success using this profile configuration with the help of the awscli_plugin_endpoint. Link to that here: https://github.com/wbingli/awscli-plugin-endpoint

If you aren't using valid SSL, you can also use the verify_ssl = false setting: https://github.com/wbingli/awscli-plugin-endpoint/blob/master/README.md

[plugins]
endpoint = awscli_plugin_endpoint

[default]
output = text
region = mycustomregion

ec2 =
    endpoint_url = https://urlhere/api/etc

elbv2 =
    endpoint_url = https://urlhere/api/etc

rds =
    endpoint_url = https://urlhere/api/etc

emr =
    endpoint_url = https://urlhere/api/etc

efs =
    endpoint_url = https://urlhere/api/etc

acm =
    endpoint_url = https://urlhere/api/etc

+1 Would really increase the usability.

Hey folks, please stop commenting unless there is additional information to share, it spams up the notifications. You can subscribe to this issue by clicking the button at the bottom.

kennu commented

I have been enjoying these +1 notifications for almost 4 years now, why stop now? ;-)

I will spam until this issue is done 😆

@mchelen it'd at least be nice to know if there was additional information to share..... ;-)

But rather than complain, how would you suggest someone begin working on this issue? I'm not terribly familiar with the codebase for this project, but a quick search confirms that there are a few references to a parsed_globals.endpoint_url, so I assume it might have something to do with that.
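
For anyone exploring that route, the plugin linked earlier takes roughly the following shape (a rough sketch only, based on the awscli v1 plugin mechanism; the 'top-level-args-parsed' event, the parsed_args attributes, and the full_config layout are assumptions to verify against the actual codebase):

# my_endpoint_plugin.py -- a hypothetical plugin module, enabled via:
#   [plugins]
#   endpoint = my_endpoint_plugin

def awscli_initialize(cli):
    # awscli v1 calls awscli_initialize() on each module listed in [plugins]
    cli.register('top-level-args-parsed', _inject_endpoint_url)

def _inject_endpoint_url(parsed_args, session, **kwargs):
    if getattr(parsed_args, 'endpoint_url', None):
        return  # an explicit --endpoint-url on the command line always wins
    # session.full_config exposes the parsed ~/.aws/config, including nested
    # per-service subsections like "dynamodb =\n    endpoint_url = ..."
    profile = session.profile or 'default'
    profile_cfg = session.full_config.get('profiles', {}).get(profile, {})
    service_cfg = profile_cfg.get(parsed_args.command, {})
    if isinstance(service_cfg, dict) and 'endpoint_url' in service_cfg:
        parsed_args.endpoint_url = service_cfg['endpoint_url']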

👍

👍

Hello @wbingli,
I am trying to use awscli-plugin-endpoint within a computational workflow that uses boto3 to download files from a DigitalOcean Spaces (S3-compatible) instance. My .aws/config is of the form:

[plugins]
endpoint = awscli_plugin_endpoint

[profile foo]
region = ams3
s3 =
   endpoint_url = https://ams3.digitaloceanspaces.com

and aws s3api list-buckets --profile foo works correctly. Nevertheless, when I use the Python interpreter, I get the following:

> import boto3
> client = boto3.Session(profile_name='foo').client('s3')
> client._client_config.s3
{'endpoint_url': 'https://ams3.digitaloceanspaces.com'}

which seems fine so far. Nevertheless, if I then do:

> client.head_object(Bucket='bar', Key='some_file.csv')
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://bar.s3.ams3.amazonaws.com/some_file.csv"

Shouldn't this work?

Note that if I change the line after the import

> client = boto3.Session(profile_name='foo').client('s3', endpoint_url='https://ams3.digitaloceanspaces.com')

everything works properly. Any suggestions? Is this an aws-cli issue? an issue of the plugin? or rather a boto3/botocore issue?

Thank you in advance,
Martí
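
A likely explanation: the plugin hooks only the aws CLI, and boto3/botocore do not apply a per-service endpoint_url from the config file on their own, which matches the behaviour shown above. A workaround sketch, not an official API: read the same profile subsection manually via botocore's full_config (which does parse nested service subsections) and pass the endpoint explicitly:

import boto3
import botocore.session

def s3_client(profile_name):
    # Read the "s3 =\n    endpoint_url = ..." subsection from ~/.aws/config
    full_config = botocore.session.Session(profile=profile_name).full_config
    s3_cfg = full_config.get('profiles', {}).get(profile_name, {}).get('s3', {})
    endpoint_url = s3_cfg.get('endpoint_url') if isinstance(s3_cfg, dict) else None
    # boto3 will not pick the endpoint up by itself, so pass it explicitly
    return boto3.Session(profile_name=profile_name).client('s3', endpoint_url=endpoint_url)

client = s3_client('foo')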

Hopefully someone can easily just mirror the changes from the PHP SDK:

https://github.com/aws/aws-sdk-php/pull/1243/files/aeb25899897fe746580ae6c80e0df6ad37931126

https://github.com/wbingli/awscli-plugin-endpoint
Problem solved in this project. Check it out.

  1. install https://github.com/wbingli/awscli-plugin-endpoint
  2. vim ~/.aws/config and add "endpoint_url="

Example:

[default]
region = ap-northeast-1
s3 =
    endpoint_url=http://localhost:8333
    signature_version = s3v4
s3api =
    endpoint_url=http://localhost:8333

[plugins]
endpoint = awscli_plugin_endpoint

@BruceWangNo1 this is a workaround, not a solution

@mrPsycho Yes, this is a workaround, not a solution.

My use case for this is going into a large customer VPC in which Internet access is restricted and we are only given room in that VPC. I have to run all my requests through a VPC endpoint. This is fine for S3, since it just routes, but specifying a specific --endpoint-url for each service is a pain in the neck.

I agree with all the use cases above and wanted to take a moment to summarize them and add mine.

  • Using private clouds or third-party clouds that have AWS API-compatible services.
  • Local development with tools like localstack.
  • AWS VPC private endpoints: accessing AWS services via endpoints only available from within your VPC, without traversing the public internet.

The overall goal is managing a config file rather than injecting variables into shell commands or scripts depending on where code is deployed, which makes that code much harder to keep portable.

It seems the plugin awscli-plugin-endpoint only works with the AWS command line (aws cli). When testing with an AWS SDK, such as Python's boto3, it doesn't understand the [plugins] setting in the config file.

Any hints for me?

$ more ~/.aws/credentials

# localstack test
# https://github.com/localstack/localstack
[localstack]
region                = ap-southeast-2
aws_secret_access_key = localstack
aws_access_key_id     = localstack

$ more ~/.aws/config
[plugins]
endpoint = awscli_plugin_endpoint

[profile localstack]
s3 =
    endpoint_url = http://localhost:4572
kms =
    endpoint_url = http://localhost:4599
lambda =
    endpoint_url = http://localhost:4574
iam =
    endpoint_url = http://localhost:4593
kinesis =
    endpoint_url = http://localhost:4568
logs =
    endpoint_url = http://localhost:4586
sts =
    endpoint_url = http://localhost:4592
ec2 =
    endpoint_url = http://localhost:4597

# Start localstack service 

$ export AWS_PROFILE=localstack

$ aws s3 mb s3://test

$ aws s3 ls
2020-02-12 23:55:26 test

# But if I test with python boto3 sdk: 

$ cat s3.py
import boto3
s3 = boto3.client('s3')
response = s3.list_buckets()
buckets = [bucket['Name'] for bucket in response['Buckets']]
print("Bucket List: %s" % buckets)

$ python s3.py
Traceback (most recent call last):
  File "s3.py", line 3, in <module>
    response = s3.list_buckets()
  File "/home/vagrant/.local/lib/python3.6/site-packages/botocore/client.py", line 276, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/home/vagrant/.local/lib/python3.6/site-packages/botocore/client.py", line 586, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records.

Possibly worth noting that localstack has moved to a single port (default 4566), so the same URL is used for all services now.

For now adding alias awslocal="aws --endpoint-url http://localhost:4566" to a shell profile is a pretty good UX though.

Thanks, @mcintyre94

This new feature in localstack looks great. I don't need to set endpoint_url for each service now; it helps simplify the code a lot.

import boto3

ENDPOINT_URL='http://localhost:4566'

session = boto3.session.Session()

# Comment out ENDPOINT_URL above to fall back to the real AWS endpoints:
try:
    ENDPOINT_URL
    s3 = session.client('s3', endpoint_url=ENDPOINT_URL, verify=None, region_name='ap-southeast-2')
except NameError:
    s3 = session.client('s3', region_name='ap-southeast-2')

s3.create_bucket(Bucket='test')
response = s3.list_buckets()
buckets = [bucket['Name'] for bucket in response['Buckets']]
print("Bucket List: %s" % buckets)

+1

+1

Fixing this would also be helpful for interfacing with the s3 endpoint on our snowball edge.

Also running into this with snowball edge usage.

This would also make things convenient for other S3 compatible object stores (I'm looking at you Ceph and Pure).

This would be very useful with my current use case.

My problem is that I cannot tell my app to connect to a containerized S3 in tests without having to modify the production code.

I have a project that uses SQS, S3 and some other resources. For the integration suite I use a container for each service, to keep it as true to production as possible.
For the resource creation I mount a directory with credentials in the app container so I can do:

docker run --rm -i -v $(PWD)/aws-test:/root/.aws --network=tests_default amazon/aws-cli --endpoint-url=$(AWS_SQS_ENDPOINT_URL_DEV):$(AWS_SQS_PORT_DEV) sqs create-queue --queue-name=$(SOME_QUEUE_NAME) > /dev/null

But in the code I instantiate resources / clients like this:

s3 = boto3.resource('s3', use_ssl=use_ssl)
...
s3 = boto3.client('s3', use_ssl=use_ssl)

I would like to be able to specify from the outside context whether I want to connect to a mock S3, because what I'm trying to avoid is having code like this in my app:

if testing:
  s3 = boto3.resource('s3', endpoint_url='something', use_ssl=use_ssl)
else:
  s3 = boto3.resource('s3', use_ssl=use_ssl)
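
One way to keep a single code path is to treat the endpoint as optional configuration rather than a branch, since endpoint_url=None means "use the default AWS endpoint". A minimal sketch, assuming a hypothetical S3_ENDPOINT_URL environment variable of your own choosing:

import os
import boto3

use_ssl = True  # as in the snippets above

# Unset in production, set to the mock's URL in tests; endpoint_url=None
# makes boto3 resolve the normal AWS endpoint, so both environments share this line.
s3 = boto3.resource('s3', endpoint_url=os.environ.get('S3_ENDPOINT_URL'), use_ssl=use_ssl)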

This issue has been open for 5 years and is one of the most commented-on and upvoted feature requests in this repo, yet I still haven't heard a peep from AWS about incorporating it.

For those using localstack, you can use this tool to access localstack via awslocal instead of aws --endpoint-url=http://localhost:4566.

For those interested in a workaround until this feature is implemented:

import os
import re
from configparser import ConfigParser

import boto3

RE = re.compile(r'(\S+) = (".*?"|\S+)')


def boto3_resource(service_name, *, profile_name='default', **kwargs):
    """Retrieves boto3.resource, filling in any service-specific
    parameters from your config, which can be specified either
    in AWS_CONFIG_FILE or ~/.aws/config (default). Similarly,
    profile_name is 'default' (default) unless AWS_PROFILE is set.

    Assumes that additional service-specific config is specified as:

    [profile_name]
    service-name =
        parameter-name = parameter-value
    another-service-name =
        ... etc
    """
    # Get the AWS config file path
    profile_name = os.environ.get('AWS_PROFILE', profile_name)
    conf_filepath = os.environ.get('AWS_CONFIG_FILE', '~/.aws/config')
    conf_filepath = os.path.expanduser(conf_filepath)
    # Parse the raw config for this profile; note that in ~/.aws/config,
    # non-default profiles live in sections named "profile <name>"
    section = profile_name if profile_name == 'default' else 'profile ' + profile_name
    parser = ConfigParser()
    with open(conf_filepath) as f:
        parser.read_file(f)
    cfg = dict(parser).get(section, {})
    # Extract the service-specific config, if any
    service_raw_cfg = cfg.get(service_name, '')
    service_cfg = {k: v for k, v in RE.findall(service_raw_cfg)}
    # Load in the service config, on top of other defaults,
    # and let boto3 do the rest
    return boto3.resource(service_name=service_name,
                          **service_cfg, **kwargs)

which can then be used as (noting that you can pass whatever regular kwargs you normally would):

s3 = boto3_resource('s3')
obj = s3.Object('test-bucket', 'my/key/including/filename.txt')
obj.put(Body='this is some text')

obj = s3.Object('test-bucket', 'my/key/including/filename.txt')
print(obj.get()['Body'].read())
>>> b'this is some text'

Huge +1 for this feature. I left --endpoint-url off a command when I was playing around with local DynamoDB tables, and it defaulted to running against the remote prod-like instance.

It is incredibly dangerous not to allow a sensible default for working in development, especially when you copy commands from the AWS guide itself, which don't include --endpoint-url by default.

I just want to share my approach to dealing with this problem; I hope it helps someone.
I moved aws-cli into Docker and added --endpoint-url http://dynamodb:8000 to the entrypoint in docker-compose.yml:

version: '3.8'

services:
  # usage: docker-compose run --rm awscli dynamodb [operation]
  awscli:
    image: amazon/aws-cli
    entrypoint: aws --endpoint-url http://dynamodb:8000
    command: --version
    environment:
      AWS_ACCESS_KEY_ID: dummy
      AWS_SECRET_ACCESS_KEY: dummy
      AWS_REGION: eu-west-1

+1. I need this to work with localstack so I can test S3 sinks in Kafka. Kafka Connect currently has no support for custom AWS URLs and defaults to the aws credentials file. Please allow us to do this.

+1

4 years later, huge demand, zero movement.

+1

> 4 years later, huge demand, zero movement.

Well, I suspect that this is not a feature that's important to AWS (since it is used to point to non-AWS servers). It's probably something that the Community should code and submit as a Pull Request.

> not a feature that's important to AWS

When using VPC endpoints, this is important, and that's AWS on both sides. It becomes problematic in air-gapped environments.

We are investigating using AWS Snowball Edge devices, and as such we want to use the AWS CLI with the Snowball device; having to specify --endpoint-url every time is really quite tiresome.

> We are investigating using AWS Snowball Edge devices, and as such we want to use the AWS CLI with the Snowball device; having to specify --endpoint-url every time is really quite tiresome.

To avoid specifying the endpoint every time, I typically create a script called aws3 and put the following in it:

#!/bin/bash
aws --endpoint-url YOUR_ENDPOINT_URL --profile YOUR_PROFILE s3 "$@"

Usage:
aws3 ls s3://bucket/voila/that/looks/better

I avoid using alias because you can't use it as easily with piped commands like xargs.

Of course, having it specified in the profile would be better. But until that day comes, there can still be joy in the world with these little shell life-hacks.

That is why I haven't migrated to aws-cli v2 yet.

I have a slightly different use case, similar to @flickerfly's. Curious to hear if folks think I should file a separate issue.

This comes up when:

  • calling an API in one region from within a VPC that's in another, and
  • assuming a role beforehand by specifying role_arn, role_session_name and credential_source in ~/.aws/config, and
  • the role being assumed having a condition requiring the use of the STS VPC endpoint in your VPC

There appears to be no way for me to specify the specific STS endpoint to use in the config file.

Here's some sample code:

~/.aws/config

[default]
region = us-east-1
sts_regional_endpoints = regional
credential_source = EcsContainer
role_arn = arn:aws:iam::11111111:role/my_role
role_session_name = my_session

my_role's trust relationship:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::22222222:role/my-ecs-task"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:SourceVpce": "vpce-01234deadbeef"
        }
      }
    }
  ]
}

Code running in the VPC containing the required STS VPC endpoint

$ aws --region us-west-2 s3 ls

The above errors out because the role can't be assumed.

Filed a new issue with a more targeted ask: #6754

Thank you for posting your feedback here, and our apologies that we’ve been thinking this over for a long time without much forward motion. There are similar requests to implement this feature in a few of the AWS SDKs and the AWS CLI, so in order to coordinate those teams - and hopefully make the discussions a little easier to follow - we’ve created a new issue in aws/aws-sdk here: aws/aws-sdk#229

Hi all,

We recently added a pull request (aws/aws-sdk#230) that contains a proposal based on community comments and suggestions and our own discussions. This document proposes to extend the options for configuring the endpoint to allow users to provide an endpoint URL independently for each AWS service via an environment variable or a profile subsection in the shared configuration file.

You can read the proposal here.

For more information on how to give feedback, please see this comment on the aws/aws-sdk repository:

aws/aws-sdk#229 (comment)

Thanks!

kdaily commented

I'm happy to announce that the ability to configure the endpoint URL via the shared configuration file and environment variables is now available in the AWS CLI v1 and v2! You can now specify the endpoint to use for all service requests through the shared configuration file and environment variables, as well as specify the endpoint URL for individual AWS services.

To start using this feature, install the AWS CLI >=1.29.0 or >=2.13.0.

To read more about this feature, see the documentation page "Service-specific Endpoints" in the AWS SDKs and Tools Reference Guide:

https://docs.aws.amazon.com/sdkref/latest/guide/feature-ss-endpoints.html

Look forward to a blog post demonstrating the use of this feature with the AWS CLI on the AWS Developer Tools Blog!
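
For reference, the shipped feature supports a global endpoint per profile, per-service settings via a services section, and environment variables. Per the guide above, usage looks roughly like this (service list abbreviated):

# Environment variables:
export AWS_ENDPOINT_URL=http://localhost:4566           # all services
export AWS_ENDPOINT_URL_DYNAMODB=http://localhost:8000  # one service

# ~/.aws/config:
[profile local]
endpoint_url = http://localhost:4566
services = local-services

[services local-services]
dynamodb =
  endpoint_url = http://localhost:8000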

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please open a new issue that references this one. If you wish to keep having a conversation with other community members under this issue feel free to do so.