./cortex/dev/export_images.sh: line 35: build/images.sh: No such file or directory
bluuewhale commented:
Version
0.42.0
Description
The `export_images.sh` script fails with the following error message:

./cortex/dev/export_images.sh: line 35: build/images.sh: No such file or directory
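For context, this looks like a working-directory issue. Here is a sketch of the failure mode, assuming line 35 sources `build/images.sh` through a path that is resolved against the caller's working directory rather than the script's own location (a hypothetical reduction, not the actual script contents):

```bash
#!/usr/bin/env bash
# sketch.sh (hypothetical): a CWD-relative `source` only resolves when the
# caller's working directory happens to be the repository root.
set -euo pipefail

source build/images.sh  # resolved against $PWD, not this script's directory
```

Invoked as `./cortex/dev/export_images.sh` from the clone's parent directory, such a lookup would search for `build/images.sh` under the caller's directory and fail exactly as above.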
Configuration
```yaml
cluster_name: cortex
region: us-east-1

availability_zones: # default: 3 random availability zones in your region, e.g. [us-east-1a, us-east-1b, us-east-1c]
  - us-east-1a
  - us-east-1b
  - us-east-1c

tags: # <string>: <string> map of key/value pairs
  service: "cleanup"

node_groups:
  - name: ng-cpu-spot # name of the node group
    instance_type: m5.large # instance type
    min_instances: 1 # minimum number of instances
    max_instances: 5 # maximum number of instances
    priority: 100 # priority of the node group; the higher the value, the higher the priority [1-100]
    instance_volume_size: 50 # disk storage size per instance (GB)
    instance_volume_type: gp3 # instance volume type [gp2 | gp3 | io1 | st1 | sc1]
    # instance_volume_iops: 3000 # instance volume iops (only applicable to io1/gp3)
    # instance_volume_throughput: 125 # instance volume throughput (only applicable to gp3)
    spot: true # whether to use spot instances
    spot_config:
      instance_distribution: [m5a.large, m5d.large, m5n.large, m5ad.large, m5dn.large, m4.large, t3.large, t3a.large, t2.large]
      on_demand_base_capacity: 1
      on_demand_percentage_above_base_capacity: 0
  - name: ng-cpu-on-demand # name of the node group
    instance_type: m5.large # instance type
    min_instances: 1 # minimum number of instances
    max_instances: 5 # maximum number of instances
    priority: 1 # priority of the node group; the higher the value, the higher the priority [1-100]
    instance_volume_size: 50 # disk storage size per instance (GB)
    instance_volume_type: gp3 # instance volume type [gp2 | gp3 | io1 | st1 | sc1]
    # instance_volume_iops: 3000 # instance volume iops (only applicable to io1/gp3)
    # instance_volume_throughput: 125 # instance volume throughput (only applicable to gp3)

subnet_visibility: private
nat_gateway: single

api_load_balancer_type: nlb
api_load_balancer_scheme: internet-facing
operator_load_balancer_scheme: internet-facing

# to install Cortex in an existing VPC, you can provide a list of subnets for your cluster to use
# subnet_visibility (specified above in this file) must match your subnets' visibility
# this is an advanced feature (not recommended for first-time users) and requires your VPC to be configured correctly; see https://eksctl.io/usage/vpc-networking/#use-existing-vpc-other-custom-configuration
# here is an example:
# subnets:
#   - availability_zone: us-west-2a
#     subnet_id: subnet-060f3961c876872ae
#   - availability_zone: us-west-2b
#     subnet_id: subnet-0faed05adf6042ab7

api_load_balancer_cidr_white_list: [0.0.0.0/0]
operator_load_balancer_cidr_white_list: [0.0.0.0/0]

# SSL certificate ARN (only necessary when using a custom domain)
# ssl_certificate_arn:

# list of IAM policies to attach to your Cortex APIs
iam_policy_arns: ["arn:aws:iam::aws:policy/AmazonS3FullAccess"]

# primary CIDR block for the cluster's VPC
vpc_cidr: 192.168.0.0/16

# instance type for prometheus (use an instance with more memory for clusters exceeding 300 nodes or 300 pods)
prometheus_instance_type: "t3.small"
```
Steps to reproduce
- export CORTEX_VERSION=0.42.0
- git clone --depth 1 --branch v$CORTEX_VERSION https://github.com/cortexlabs/cortex.git
- ./cortex/dev/export_images.sh us-east-1 $AWS_ACCOUNT_ID # (my AWS account ID is set as an environment variable)
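For what it's worth, a hedged workaround under the assumption above (that the `build/images.sh` path is CWD-relative) is to make the repository root the working directory before invoking the script:

```bash
# Workaround sketch (assuming build/images.sh is resolved relative to $PWD):
# run the script from inside the repo root so the relative path resolves.
export CORTEX_VERSION=0.42.0
git clone --depth 1 --branch v$CORTEX_VERSION https://github.com/cortexlabs/cortex.git
cd cortex
./dev/export_images.sh us-east-1 $AWS_ACCOUNT_ID
```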
Expected behavior
The script pulls the Cortex container images and pushes them to my private ECR registry.
Actual behavior
./cortex/dev/export_images.sh: line 35: build/images.sh: No such file or directory
Stack traces
```
$ ./cortex/dev/export_images.sh us-east-1 $AWS_ACCOUNT_ID
Login Succeeded

Logging in with your password grants your terminal complete access to your account.
For better security, log in with a limited-privilege personal access token. Learn more at https://docs.docker.com/go/access-tokens/

./cortex/dev/export_images.sh: line 35: build/images.sh: No such file or directory
```
Suggested solution
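A possible fix, sketched under the same assumption (that line 35 uses a CWD-relative path): resolve paths from the script's own location so it can be invoked from any directory.

```bash
# Sketch: derive the repo root from the script's location instead of
# relying on the caller's working directory.
ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." > /dev/null && pwd)"

source "$ROOT/build/images.sh"
```

This is the standard BASH_SOURCE pattern; I haven't verified it against the 0.42.0 script.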