VSCode disconnects after credentials refresh.
harish-kamath opened this issue · 7 comments
Thank you for the great library! It works fine when I SSH in directly, no issues there. However, if I connect with VSCode, it'll work fine for the most part - until I see a log:
[sagemaker-ssh-helper][sm-setup-ssh][start-ssh] 2024-04-21 07:28:33 INFO [CredentialRefresher] Next credential rotation will be in 29.997197441433332 minutes
Then, the machine will fail within a minute or two of this message, and SageMaker reports the error InternalServerError: We encountered an internal error. Please try again.
Weirdly enough, this happens regardless of the instance type, amount of memory, etc. Also, it doesn't always happen the first time that message is logged, so I'm not sure if it's exactly that issue or something else. Regardless, the machine fails with an internal server error only when I connect with VS Code, after some amount of time connected.
Hi @harish-kamath, very nice that you like this library! I have a few questions:
1/ When you say InternalServerError: We encountered an internal error. Please try again - in which log file exactly do you see this message? Is it in CloudWatch?
2/ Do you know which process generates this message, e.g., is there any prefix in front of this line?
3/ Which SageMaker component are you connecting to, e.g., SageMaker Training, Studio, or Inference?
Upon further digging, I'm not sure if it is actually the credentials (only).
I noticed that the SSM agent will first pull AWS credentials from the environment variables, so I tried including an explicit AWS access key and secret key in my training job. The logs still show that the credentials are being refreshed, but the machine no longer crashes ~1 minute after the credentials are refreshed. However, now it just crashes at other times (even when nothing is running, so there's no chance it's a resource issue).
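In case it helps with reproducing this, here is a minimal sketch of how the explicit keys can be injected (not my exact job definition), assuming the SageMaker Python SDK Estimator and its environment parameter; the image URI, role, and key values are placeholders:

```python
# Minimal sketch: pass static AWS keys to the training container via the
# Estimator's `environment` parameter, so the SSM agent finds them in the
# environment instead of relying only on the rotating role credentials.
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="<my-training-image-uri>",        # placeholder
    role="<my-sagemaker-execution-role-arn>",   # placeholder
    instance_count=1,
    instance_type="ml.g4dn.xlarge",
    environment={
        "AWS_ACCESS_KEY_ID": "<static-access-key>",      # placeholder; never hard-code real keys
        "AWS_SECRET_ACCESS_KEY": "<static-secret-key>",  # placeholder
    },
)
estimator.fit(wait=False)
```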
And here is the last CloudWatch log:
(Note that in this case, I did not do sm-wait stop, but it doesn't actually matter for this error. It occurs even if I do that.)
-
There's no prefix or process unfortunately. Since it just crashes the machine, and there's no persistent storage, I'm not sure how I can actually debug after a crash either.
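The best I can do after the fact is pull whatever made it into CloudWatch before the crash; a rough boto3 sketch of that (the job name is a placeholder):

```python
# Rough sketch: fetch the last log events for a crashed training job from
# CloudWatch, which keeps the container logs even though the instance itself
# has no persistent storage. "<my-training-job-name>" is a placeholder.
import boto3

logs = boto3.client("logs")
streams = logs.describe_log_streams(
    logGroupName="/aws/sagemaker/TrainingJobs",
    logStreamNamePrefix="<my-training-job-name>",
)["logStreams"]

for stream in streams:
    events = logs.get_log_events(
        logGroupName="/aws/sagemaker/TrainingJobs",
        logStreamName=stream["logStreamName"],
        startFromHead=False,  # read from the tail: most recent page of events
    )["events"]
    for event in events[-50:]:
        print(event["message"])
```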
-
SageMaker Training Jobs
On the bright side, it no longer always crashes after 30 minutes of being connected. However, it is still crashing within an hour.
Never mind, just got another crash in <30 minutes.
I'm pretty sure it is still this package, because connecting over plain SSH is still fine and never causes a crash.
Hi @harish-kamath, apologies for the delay; this indeed seems very strange to me. I will have to investigate it further, since it never crashed like that before in the short-running tests. In the meantime, is it possible for you to raise a support case from the AWS Console? Please add a link to this issue and mention my name:
https://docs.aws.amazon.com/awssupport/latest/user/case-management.html
@harish-kamath I am using the following manual test to try to reproduce the issue. Without a connection from VS Code, the job successfully stops after 3 hours without any "Internal Server Error". I have a few more asks and questions:
1/ What instance types did you try?
2/ Could you please run the test on ml.g4dn.xlarge (see also the sketch below the status table):
sagemaker-ssh-helper/tests/test_manual.py, lines 23 to 50 at 0d70d02
| Status | Start time | End time | Description |
|---|---|---|---|
| Starting | 5/11/2024, 10:41:50 AM | 5/11/2024, 10:42:32 AM | Preparing the instances for training |
| Downloading | 5/11/2024, 10:42:32 AM | 5/11/2024, 10:46:08 AM | Downloading the training image |
| Training | 5/11/2024, 10:46:08 AM | 5/11/2024, 1:45:42 PM | Training image download completed. Training in progress. |
| Stopping | 5/11/2024, 1:45:42 PM | 5/11/2024, 1:45:43 PM | Stopping the training job |
| Uploading | 5/11/2024, 1:45:43 PM | 5/11/2024, 1:45:55 PM | Uploading generated training model |
| MaxRuntimeExceeded | 5/11/2024, 1:45:55 PM | 5/11/2024, 1:45:55 PM | Resource released due to keep alive period expiry |
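For reference, the general pattern that such a manual test follows is roughly like this (a sketch along the lines of the library README, not the exact contents of test_manual.py; the entry point, role, and framework version are placeholders):

```python
# Sketch of the general pattern: a long-running training job with the SSH
# helper wrapper attached, so you can connect over SSM/SSH while it runs.
from sagemaker.pytorch import PyTorch
from sagemaker_ssh_helper.wrapper import SSHEstimatorWrapper

estimator = PyTorch(
    entry_point="train.py",                      # placeholder training script
    source_dir=".",                              # placeholder
    role="<my-sagemaker-execution-role-arn>",    # placeholder
    framework_version="1.13",
    py_version="py39",
    instance_count=1,
    instance_type="ml.g4dn.xlarge",
    max_run=3 * 60 * 60,                         # let the job run for ~3 hours
    dependencies=[SSHEstimatorWrapper.dependency_dir()],
)

ssh_wrapper = SSHEstimatorWrapper.create(estimator, connection_wait_time_seconds=600)
estimator.fit(wait=False)

# SSM managed instance ids ("mi-...") to connect to once the job is running.
print(ssh_wrapper.get_instance_ids())
```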
3/ Don't connect with VS Code at first; wait until the credentials refresh automatically one or two times. The job should not crash. Then try to connect with VS Code.
So far it seems to me that the issue has nothing to do with the credential refresh, because credentials are refreshed automatically all the time and this is expected.
4/ How likely is it that VS Code runs some heavy process inside and the instance is running out of RAM?
Could you please check the utilization on the job page in the AWS Console? The successful run looks like this:

Make a note of the exact time the credentials were refreshed and the time you connected with VS Code.
I hope the above steps will help you localize and isolate the issue down to some process that VS Code starts inside the container.
@harish-kamath I'm closing this issue for now, since I got no reply to my last recommendations. Please feel free to reopen if you still have issues.