SageMaker Inference fails on localstack
clayrisser opened this issue · 7 comments
https://github.com/localstack-samples/localstack-pro-samples/tree/master/sagemaker-inference
When I try to run the sagemaker-inference sample, I get the following error.
Please provide a model_fn implementation.
See documentation for model_fn at https://github.com/aws/sagemaker-python-sdk
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/sagemaker_inference/transformer.py", line 110, in transform
self.validate_and_initialize(model_dir=model_dir)
File "/opt/conda/lib/python3.6/site-packages/sagemaker_inference/transformer.py", line 158, in validate_and_initialize
self._model = self._model_fn(model_dir)
File "/opt/conda/lib/python3.6/site-packages/sagemaker_pytorch_serving_container/default_inference_handler.py", line 48, in default_model_fn
"""))
NotImplementedError:
Please provide a model_fn implementation.
See documentation for model_fn at https://github.com/aws/sagemaker-python-sdk
Traceback (most recent call last):
File "/Users/clayrisser/Projects/localstack-pro-samples/sagemaker-inference/main.py", line 137, in <module>
run_regular()
File "/Users/clayrisser/Projects/localstack-pro-samples/sagemaker-inference/main.py", line 124, in run_regular
inference_model_boto3(test_run)
File "/Users/clayrisser/Projects/localstack-pro-samples/sagemaker-inference/main.py", line 110, in inference_model_boto3
_show_predictions(json.loads(response["Body"].read().decode("utf-8")))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/clayrisser/.asdf/installs/python/3.11.9/lib/python3.11/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/clayrisser/.asdf/installs/python/3.11.9/lib/python3.11/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/clayrisser/.asdf/installs/python/3.11.9/lib/python3.11/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
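From what I can tell, the client-side JSONDecodeError is secondary: main.py is simply failing to parse the body of the 500 error response returned by the endpoint (see the "POST /invocations" 500 line in the container logs below). The root cause appears to be that the serving container cannot find a model_fn. For reference, this is the kind of model_fn the PyTorch serving container expects from the inference script; the file name and artifact name below are illustrative, not taken from the sample:

```python
# Illustrative inference.py (normally packaged under code/ inside model.tar.gz).
# MODEL_FILE is a placeholder; the real artifact name depends on the sample's model tarball.
import os

import torch

MODEL_FILE = "model.pth"


def model_fn(model_dir):
    """Called once per worker by the serving container; must return the loaded model."""
    model = torch.jit.load(os.path.join(model_dir, MODEL_FILE), map_location="cpu")
    model.eval()
    return model
```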
I am running LocalStack Pro. My token is valid. I started LocalStack Pro with localstack start.
I installed all the Python dependencies with pip install -r requirements.txt and then ran the inference with python3 main.py. That is when I get the error above.
I am using Python version 3.11.9
I am using localstack version 2.2.0
I am using the latest localstack-pro container
87fce7a1ec9f localstack/localstack-pro:latest "docker-entrypoint.sh" 52 minutes ago Up 52 minutes (healthy) 0.0.0.0:53->53/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:4510-4559->4510-4559/tcp, 0.0.0.0:4566->4566/tcp, 0.0.0.0:53->53/udp, 5678/tcp magical_diffie
I noticed the model artifact seems to be missing from the container's /opt/ml/model folder. I even tried baking the model directly into the /opt/ml/model folder in the container, but when inference runs, that folder seems to be cleared. I'm not sure whether this is related to the issue or not.
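For context, my understanding of how that folder is normally populated: the artifact referenced by ModelDataUrl at model-creation time gets downloaded from S3 and extracted into /opt/ml/model when the endpoint container starts, so files baked into the image would not survive that step. A rough sketch of the registration call (the values below are placeholders, not copied from the sample):

```python
# Hypothetical model registration against LocalStack; the image URI, bucket,
# and role ARN are placeholders, not values from the sample.
import boto3

sm = boto3.client(
    "sagemaker",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)
sm.create_model(
    ModelName="my-model",
    PrimaryContainer={
        "Image": "763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference:1.5.1-cpu-py3",
        # SageMaker is supposed to download and extract this into /opt/ml/model:
        "ModelDataUrl": "s3://my-bucket/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::000000000000:role/sagemaker-role",
)
```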
I also noticed that when I log in to the model container I'm not able to access LocalStack using the AWS CLI. I'm not sure if that is normal behavior or not.
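To clarify what I mean, the snippet below is roughly the Python equivalent of what I was attempting with the AWS CLI from inside the model container (the endpoint URL is an assumption based on the Docker bridge IP from the config above; it may need to be host.docker.internal or the LocalStack container's network alias instead):

```python
# Hypothetical connectivity check from inside the model container.
# The endpoint_url is an assumption; adjust to however the container can
# actually reach the LocalStack edge port (4566).
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://172.17.0.1:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```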
Here is my LocalStack config:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Key ┃ Value ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ ALLOW_NONSTANDARD_REGIONS │ False │
│ BUCKET_MARKER_LOCAL │ hot-reload │
│ CFN_RESOURCE_PROVIDER_OVERRIDES │ {} │
│ CFN_VERBOSE_ERRORS │ False │
│ CUSTOM_SSL_CERT_PATH │ │
│ DATA_DIR │ │
│ DEBUG │ False │
│ DEBUG_HANDLER_CHAIN │ False │
│ DEFAULT_REGION │ us-east-1 │
│ DEVELOP │ False │
│ DEVELOP_PORT │ 5678 │
│ DISABLE_CORS_CHECKS │ False │
│ DISABLE_CORS_HEADERS │ False │
│ DISABLE_CUSTOM_CORS_APIGATEWAY │ False │
│ DISABLE_CUSTOM_CORS_S3 │ False │
│ DISABLE_EVENTS │ False │
│ DOCKER_BRIDGE_IP │ 172.17.0.1 │
│ DOCKER_SDK_DEFAULT_TIMEOUT_SECONDS │ 60 │
│ DYNAMODB_ERROR_PROBABILITY │ 0.0 │
│ DYNAMODB_HEAP_SIZE │ 256m │
│ DYNAMODB_READ_ERROR_PROBABILITY │ 0.0 │
│ DYNAMODB_SHARE_DB │ 0 │
│ DYNAMODB_WRITE_ERROR_PROBABILITY │ 0.0 │
│ EAGER_SERVICE_LOADING │ False │
│ EDGE_BIND_HOST │ 127.0.0.1 │
│ EDGE_FORWARD_URL │ │
│ EDGE_PORT │ 4566 │
│ EDGE_PORT_HTTP │ 0 │
│ ENABLE_CONFIG_UPDATES │ False │
│ EXTRA_CORS_ALLOWED_HEADERS │ │
│ EXTRA_CORS_ALLOWED_ORIGINS │ │
│ EXTRA_CORS_EXPOSE_HEADERS │ │
│ GATEWAY_LISTEN │ 127.0.0.1:4566 │
│ HOSTNAME_EXTERNAL │ localhost │
│ HOSTNAME_FROM_LAMBDA │ │
│ KINESIS_ERROR_PROBABILITY │ 0.0 │
│ KINESIS_INITIALIZE_STREAMS │ │
│ KINESIS_MOCK_PERSIST_INTERVAL │ 5s │
│ KINESIS_ON_DEMAND_STREAM_COUNT_LIMIT │ 10 │
│ LAMBDA_CODE_EXTRACT_TIME │ 25 │
│ LAMBDA_CONTAINER_REGISTRY │ lambci/lambda │
│ LAMBDA_DEV_PORT_EXPOSE │ False │
│ LAMBDA_DOCKER_DNS │ │
│ LAMBDA_DOCKER_FLAGS │ │
│ LAMBDA_DOCKER_NETWORK │ │
│ LAMBDA_EXECUTOR │ │
│ LAMBDA_FALLBACK_URL │ │
│ LAMBDA_FORWARD_URL │ │
│ LAMBDA_INIT_BIN_PATH │ None │
│ LAMBDA_INIT_BOOTSTRAP_PATH │ None │
│ LAMBDA_INIT_DEBUG │ False │
│ LAMBDA_INIT_DELVE_PATH │ None │
│ LAMBDA_INIT_DELVE_PORT │ 40000 │
│ LAMBDA_INIT_RELEASE_VERSION │ None │
│ LAMBDA_INIT_USER │ None │
│ LAMBDA_JAVA_OPTS │ │
│ LAMBDA_LIMITS_CODE_SIZE_UNZIPPED │ 262144000 │
│ LAMBDA_LIMITS_CODE_SIZE_ZIPPED │ 52428800 │
│ LAMBDA_LIMITS_CONCURRENT_EXECUTIONS │ 1000 │
│ LAMBDA_LIMITS_CREATE_FUNCTION_REQUEST_SIZE │ 69905067 │
│ LAMBDA_LIMITS_MAX_FUNCTION_ENVVAR_SIZE_BYTES │ 4096 │
│ LAMBDA_LIMITS_MINIMUM_UNRESERVED_CONCURRENCY │ 100 │
│ LAMBDA_LIMITS_TOTAL_CODE_SIZE │ 80530636800 │
│ LAMBDA_REMOTE_DOCKER │ False │
│ LAMBDA_REMOVE_CONTAINERS │ True │
│ LAMBDA_RETRY_BASE_DELAY_SECONDS │ 60 │
│ LAMBDA_RUNTIME_ENVIRONMENT_TIMEOUT │ 10 │
│ LAMBDA_RUNTIME_EXECUTOR │ │
│ LAMBDA_RUNTIME_IMAGE_MAPPING │ │
│ LAMBDA_STAY_OPEN_MODE │ False │
│ LAMBDA_SYNCHRONOUS_CREATE │ False │
│ LAMBDA_TRUNCATE_STDOUT │ 2000 │
│ LEGACY_DOCKER_CLIENT │ False │
│ LEGACY_EDGE_PROXY │ False │
│ LEGACY_SNS_GCM_PUBLISHING │ False │
│ LOCALSTACK_HOST │ localhost.localstack.cloud:4566 │
│ LOCALSTACK_HOSTNAME │ localhost │
│ LS_LOG │ False │
│ MAIN_CONTAINER_NAME │ localstack_main │
│ MAIN_DOCKER_NETWORK │ │
│ OPENSEARCH_ENDPOINT_STRATEGY │ domain │
│ OUTBOUND_HTTPS_PROXY │ │
│ OUTBOUND_HTTP_PROXY │ │
│ PARITY_AWS_ACCESS_KEY_ID │ False │
│ PERSISTENCE │ False │
│ PORTS_CHECK_DOCKER_IMAGE │ │
│ S3_SKIP_KMS_KEY_VALIDATION │ True │
│ S3_SKIP_SIGNATURE_VALIDATION │ True │
│ SKIP_INFRA_DOWNLOADS │ │
│ SKIP_SSL_CERT_DOWNLOAD │ False │
│ SNAPSHOT_FLUSH_INTERVAL │ 15 │
│ SNAPSHOT_LOAD_STRATEGY │ │
│ SNAPSHOT_SAVE_STRATEGY │ │
│ SQS_CLOUDWATCH_METRICS_REPORT_INTERVAL │ 60 │
│ SQS_DELAY_PURGE_RETRY │ False │
│ SQS_DELAY_RECENTLY_DELETED │ False │
│ SQS_DISABLE_CLOUDWATCH_METRICS │ False │
│ SQS_ENDPOINT_STRATEGY │ off │
│ SQS_PORT_EXTERNAL │ 0 │
│ STEPFUNCTIONS_LAMBDA_ENDPOINT │ │
│ SYNCHRONOUS_KINESIS_EVENTS │ True │
│ TF_COMPAT_MODE │ False │
│ USE_SINGLE_REGION │ False │
│ USE_SSL │ False │
│ WAIT_FOR_DEBUGGER │ False │
│ WINDOWS_DOCKER_MOUNT_PREFIX │ /host_mnt │
└──────────────────────────────────────────────┴─────────────────────────────────┘
Here are the logs from the SageMaker inference container:
2024-09-04 20:29:06,066 [INFO ] main com.amazonaws.ml.mms.ModelServer -
MMS Home: /opt/conda/lib/python3.6/site-packages
Current directory: /
Temp directory: /home/model-server/tmp
Number of GPUs: 0
Number of CPUs: 12
Max heap size: 3550 M
Python executable: /opt/conda/bin/python3.6
Config file: /etc/sagemaker-mms.properties
Inference address: http://0.0.0.0:8080/
Management address: http://0.0.0.0:8080/
Model Store: /.sagemaker/mms/models
Initial Models: ALL
Log dir: /logs
Metrics dir: /logs
Netty threads: 0
Netty client threads: 0
Default workers per model: 12
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Preload model: false
Prefer direct buffer: false
2024-09-04 20:29:06,188 [WARN ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-9000-model
2024-09-04 20:29:06,331 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - model_service_worker started with args: --sock-type unix --sock-name /home/model-server/tmp/.mms.sock.9000 --handler sagemaker_pytorch_serving_container.handler_service --model-path /.sagemaker/mms/models/model --model-name model --preload-model false --tmp-dir /home/model-server/tmp
2024-09-04 20:29:06,332 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Listening on port: /home/model-server/tmp/.mms.sock.9000
2024-09-04 20:29:06,332 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - [PID] 49
2024-09-04 20:29:06,333 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - MMS worker started.
2024-09-04 20:29:06,334 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Python runtime: 3.6.13
2024-09-04 20:29:06,336 [INFO ] main com.amazonaws.ml.mms.wlm.ModelManager - Model model loaded.
2024-09-04 20:29:06,345 [INFO ] main com.amazonaws.ml.mms.ModelServer - Initialize Inference server with: EpollServerSocketChannel.
2024-09-04 20:29:06,352 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000
2024-09-04 20:29:06,352 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000
2024-09-04 20:29:06,352 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000
2024-09-04 20:29:06,352 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000
2024-09-04 20:29:06,352 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000
2024-09-04 20:29:06,352 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000
2024-09-04 20:29:06,353 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000
2024-09-04 20:29:06,352 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000
2024-09-04 20:29:06,352 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000
2024-09-04 20:29:06,352 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000
2024-09-04 20:29:06,353 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000
2024-09-04 20:29:06,353 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000
2024-09-04 20:29:06,564 [INFO ] main com.amazonaws.ml.mms.ModelServer - Inference API bind to: http://0.0.0.0:8080/
Model server started.
2024-09-04 20:29:06,579 [WARN ] pool-2-thread-1 com.amazonaws.ml.mms.metrics.MetricCollector - worker pid is not available yet.
2024-09-04 20:29:06,596 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000.
2024-09-04 20:29:06,596 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000.
2024-09-04 20:29:06,596 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000.
2024-09-04 20:29:06,597 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000.
2024-09-04 20:29:06,597 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000.
2024-09-04 20:29:06,597 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000.
2024-09-04 20:29:06,597 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000.
2024-09-04 20:29:06,597 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000.
2024-09-04 20:29:06,597 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000.
2024-09-04 20:29:06,597 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000.
2024-09-04 20:29:06,597 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000.
2024-09-04 20:29:06,597 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000.
2024-09-04 20:29:07,296 [INFO ] pool-1-thread-14 ACCESS_LOG - /192.168.65.1:19568 "GET /ping HTTP/1.1" 200 54
2024-09-04 20:29:07,951 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model model loaded io_fd=0242acfffe110004-00000019-00000006-9cbd51ae5871ffba-38a08844
2024-09-04 20:29:07,954 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model model loaded io_fd=0242acfffe110004-00000019-0000000c-cfa831ae5871ffba-5c4ab86c
2024-09-04 20:29:07,957 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 991
2024-09-04 20:29:07,959 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 993
2024-09-04 20:29:07,962 [WARN ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-model-5
2024-09-04 20:29:07,962 [WARN ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-model-8
2024-09-04 20:29:07,978 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 4
2024-09-04 20:29:07,979 [INFO ] W-9000-model ACCESS_LOG - /192.168.65.1:50075 "POST /invocations HTTP/1.1" 500 332
2024-09-04 20:29:07,984 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model model loaded io_fd=0242acfffe110004-00000019-00000002-c59791ae5871ffba-b11994f4
2024-09-04 20:29:07,987 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 1020
2024-09-04 20:29:07,988 [WARN ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-model-10
2024-09-04 20:29:08,033 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model model loaded io_fd=0242acfffe110004-00000019-00000004-735251ae5871ffba-4ee7607a
2024-09-04 20:29:08,035 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 1068
2024-09-04 20:29:08,036 [WARN ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-model-4
2024-09-04 20:29:08,042 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model model loaded io_fd=0242acfffe110004-00000019-00000009-25d2d1ae5871ffba-950975e4
2024-09-04 20:29:08,042 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 1075
2024-09-04 20:29:08,042 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model model loaded io_fd=0242acfffe110004-00000019-00000000-bc3791ae5871ffba-656ca361
2024-09-04 20:29:08,043 [WARN ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-model-11
2024-09-04 20:29:08,047 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model model loaded io_fd=0242acfffe110004-00000019-00000007-51af51ae5871ffba-ca3aa756
2024-09-04 20:29:08,047 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 1079
2024-09-04 20:29:08,049 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 1082
2024-09-04 20:29:08,049 [WARN ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-model-9
2024-09-04 20:29:08,051 [WARN ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-model-7
2024-09-04 20:29:08,051 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 1085
2024-09-04 20:29:08,052 [WARN ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-model-12
2024-09-04 20:29:08,061 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model model loaded io_fd=0242acfffe110004-00000019-00000005-7e5e51ae5871ffba-0acf6457
2024-09-04 20:29:08,070 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model model loaded io_fd=0242acfffe110004-00000019-00000008-a098d1ae5871ffba-7324c0db
2024-09-04 20:29:08,075 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 1107
2024-09-04 20:29:08,077 [WARN ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-model-6
2024-09-04 20:29:08,086 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model model loaded io_fd=0242acfffe110004-00000019-00000001-7b5651ae5871ffba-c8a72c08
2024-09-04 20:29:08,088 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 1122
2024-09-04 20:29:08,089 [WARN ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-model-2
2024-09-04 20:29:08,105 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model model loaded io_fd=0242acfffe110004-00000019-00000003-823851ae5871ffba-e3450ec4
2024-09-04 20:29:08,106 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 1139
2024-09-04 20:29:08,107 [WARN ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-model-1
2024-09-04 20:29:08,152 [INFO ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 1186
2024-09-04 20:29:08,153 [WARN ] W-9000-model com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-model-3
2024-09-04 20:29:08,158 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model model loaded io_fd=0242acfffe110004-00000019-0000000b-ec87d1ae5871ffba-e4533c9d
Hi @clayrisser, I can confirm the bug with the latest LocalStack Docker image. Is running the sample app a priority for you right now? We're planning to expand features for SageMaker soon and could address this issue in that update. Let us know your preference, and we can arrange an immediate fix if needed. :)
It is the main reason we are using localstack pro. It is very urgent for what we're working on. An immediate fix would be appreciated.
@HarshCasper any updates on this?
Hi @clayrisser, and sorry for the wait. I've just implemented a fix for this. I will notify you once this is available in the image.
@silv-io thank you so much. What do I need to do to make it work?
@clayrisser We will inform you once the fix is released. Once done, you just need to pull the latest Docker image. I will personally test the sample app and let you know once it works :)
Hi @clayrisser, just tried out the sample with the latest Docker image and can confirm that it works. On your end, you just need to run docker pull localstack/localstack-pro:latest and launch your container as before. The sample should work then.