efsync2 is an updated fork of Philipp Schmid's efsync tool. I noticed there were errors rendering the code non-functional for deployment, as described in this issue. Fortunately, Chi W Pak wrote a fix and opened a pull request! Unfortunately, as of this writing, that issue and PR have been outstanding for over six months. I wanted to share deployable serverless machine learning inference running purely on Lambda, and wanted the instructions to include functioning packages and code. So, I implemented Chi's fix in a fork of Philipp's code, and am releasing it as its own package for future use -- efsync2!
The vast majority of the work I attribute to Philipp, and I keep the license the same. I wanted to keep as much functionality as possible unchanged, with identical function calls to his work, in case anyone was looking to switch over from the original efsync.
efsync2 is a CLI/SDK tool which automatically syncs files and dependencies to AWS EFS. The CLI is easy to use: you only need access to an AWS account and a running AWS EFS filesystem. Philipp wrote an article on the original efsync here, which currently covers the same function calls and functionality.
I recommend starting with the Quick Start. efsync2 enables you to install dependencies with the AWS Lambda runtime directly into your EFS filesystem and use them in your AWS Lambda function. You can either combine this with syncing files from S3 or uploading them with SCP, or sync files from S3 and upload with SCP without installing any pip dependencies.
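Once the dependencies are synced, a Lambda function picks them up by putting the EFS path on `sys.path` before importing. A minimal sketch -- the mount path `/mnt/efs` and the `lib` directory are assumptions here (the mount path is whatever you configure on the Lambda function's EFS access point, and `lib` matches the `efs_pip_dir` value used during the sync):

```python
import sys

# Assumed names for illustration -- adjust to your own setup:
EFS_MOUNT_PATH = "/mnt/efs"   # Lambda "local mount path" for the EFS access point
EFS_PIP_DIR = "lib"           # matches the efs_pip_dir value from your efsync.yaml

def add_efs_packages_to_path(mount_path: str = EFS_MOUNT_PATH,
                             pip_dir: str = EFS_PIP_DIR) -> str:
    """Prepend the EFS pip directory to sys.path so imports resolve from EFS."""
    pkg_path = f"{mount_path}/{pip_dir}"
    if pkg_path not in sys.path:
        sys.path.insert(0, pkg_path)
    return pkg_path

# Call this at the top of your handler module, before importing the heavy
# dependencies that live on EFS:
# add_efs_packages_to_path()
# import torch  # now resolved from /mnt/efs/lib
```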
There are several examples covering many use cases.
#Install via pip3:
pip3 install efsync2
#Sync your pip dependencies or files to AWS EFS with the CLI:
efsync2 -cf efsync.yaml
or with Python:
from efsync2 import efsync
efsync('efsync.yaml')
There are four different ways to use efsync2 in your project: create a yaml configuration and use the SDK, create a python dict and use the SDK, create a yaml configuration and use the CLI, or use the CLI with parameters. Below you can find examples for each of these, followed by configuration examples for the different use cases.
Note: If you sync files with scp from a local directory (e.g. `model/bert`) to EFS (`my_efs_model`), efsync2 will sync the model to `my_efs_model/bert`. This happens because scp uploads the files recursively.
#standard configuration
efs_filesystem_id: fs-2adfas123 # aws efs filesystem id (mount point)
subnet_Id: subnet-xxx # subnet in which the efs is running
ec2_key_name: efsync2-asd913fjgq3 # required key name for starting the ec2 instance
clean_efs: all # defines whether the EFS should be cleaned up before uploading; values: `'all'`, `'pip'`, `'file'`
# aws profile configuration
aws_profile: efsync2 # aws iam profile with required permissions configured in .aws/credentials
aws_region: eu-central-1 # the aws region where the efs is running
# pip dependencies configuration
efs_pip_dir: lib # pip directory on ec2
python_version: 3.8 # python version used for installing pip dependencies -> should match the lambda runtime used afterwards
requirements: requirements.txt # path to the requirements.txt which holds the installable pip dependencies
# s3 config
s3_bucket: my-bucket-with-files # s3 bucket from which files will be downloaded
s3_keyprefix: models/bert # s3 key prefix for the files
file_dir_on_ec2: ml # name of the directory where files from <file_dir> will be uploaded; with scp it will be /<file_dir>
# upload files with scp to efs
file_dir: local_dir # extra local directory for file uploads like ML models
from efsync2 import efsync
efsync('efsync.yaml')
efsync2 --efs_filesystem_id fs-2adfas123 \
--subnet_Id subnet-xxx \
--ec2_key_name efsync2-asd913fjgq3 \
--clean_efs all \
--aws_profile efsync2 \
--aws_region eu-central-1 \
--efs_pip_dir lib \
--python_version 3.8 \
--requirements requirements.txt \
--s3_bucket my-bucket-with-files \
--s3_keyprefix models/bert \
--file_dir local_dir \
--file_dir_on_ec2 ml
efsync2 -cf efsync.yaml
config = {
  'efs_filesystem_id': 'fs-2adfas123', # aws efs filesystem id (mount point)
  'subnet_Id': 'subnet-xxx', # subnet in which the efs is running
  'ec2_key_name': 'efsync2-asd913fjgq3', # required key name for starting the ec2 instance
  'clean_efs': 'all', # defines whether the EFS should be cleaned up before uploading; values: 'all', 'pip', 'file'
  'aws_profile': 'efsync2', # aws iam profile with required permissions configured in .aws/credentials
  'aws_region': 'eu-central-1', # the aws region where the efs is running
  'efs_pip_dir': 'lib', # pip directory on ec2
  'python_version': 3.8, # python version used for installing pip dependencies -> should match the lambda runtime used afterwards
  'requirements': 'requirements.txt', # path to the requirements.txt which holds the installable pip dependencies
  'file_dir': 'local_dir', # extra local directory for file uploads like ML models
  'file_dir_on_ec2': 'ml', # name of the directory where files from <file_dir> will be uploaded; with scp it will be /<file_dir>
  's3_bucket': 'my-bucket-with-files', # s3 bucket from which files will be downloaded
  's3_keyprefix': 'models/bert' # s3 key prefix for the files
}
from efsync2 import efsync
efsync(config)
#standard configuration
efs_filesystem_id: fs-2adfas123 # aws efs filesystem id (mount point)
subnet_Id: subnet-xxx # subnet in which the efs is running
ec2_key_name: efsync2-asd913fjgq3 # required key name for starting the ec2 instance
clean_efs: all # defines whether the EFS should be cleaned up before uploading; values: `'all'`, `'pip'`, `'file'`
# aws profile configuration
aws_profile: efsync2 # aws iam profile with required permissions configured in .aws/credentials
aws_region: eu-central-1 # the aws region where the efs is running
# pip dependencies configuration
efs_pip_dir: lib # pip directory on ec2
python_version: 3.8 # python version used for installing pip dependencies -> should match the lambda runtime used afterwards
requirements: requirements.txt # path to the requirements.txt which holds the installable pip dependencies
#standard configuration
efs_filesystem_id: fs-2226b27a # aws efs filesystem id (mount point)
subnet_Id: subnet-17f97a7d # subnet in which the efs is running
ec2_key_name: efsync2-asd913fjgq3 # required key name for starting the ec2 instance
clean_efs: all # defines whether the EFS should be cleaned up before uploading; values: `'all'`, `'pip'`, `'file'`
# aws profile configuration
aws_profile: efsync2 # aws iam profile with required permissions configured in .aws/credentials
aws_region: eu-central-1 # the aws region where the efs is running
# pip dependencies configuration
efs_pip_dir: lib # pip directory on ec2
python_version: 3.8 # python version used for installing pip dependencies -> should match the lambda runtime used afterwards
requirements: requirements.txt # path to the requirements.txt which holds the installable pip dependencies
# s3 config
s3_bucket: efsync2-test-bucket # s3 bucket from which files will be downloaded
s3_keyprefix: distilbert # s3 key prefix for the files
file_dir_on_ec2: ml # name of the directory where files from <file_dir> will be uploaded; with scp it will be /<file_dir>
#standard configuration
efs_filesystem_id: fs-2226b27a # aws efs filesystem id (mount point)
subnet_Id: subnet-17f97a7d # subnet in which the efs is running
ec2_key_name: efsync2-asd913fjgq3 # required key name for starting the ec2 instance
clean_efs: all # defines whether the EFS should be cleaned up before uploading; values: `'all'`, `'pip'`, `'file'`
# aws profile configuration
aws_profile: efsync2 # aws iam profile with required permissions configured in .aws/credentials
aws_region: eu-central-1 # the aws region where the efs is running
# s3 config
s3_bucket: efsync2-test-bucket # s3 bucket from which files will be downloaded
s3_keyprefix: distilbert # s3 key prefix for the files
file_dir_on_ec2: ml # name of the directory where files from <file_dir> will be uploaded; with scp it will be /<file_dir>
Note: If you sync a file with scp from a local directory (e.g. `model/bert`) to EFS (`my_efs_model`), efsync2 will sync the model to `my_efs_model/bert`. This is due to scp's recursive copy functionality. For example, if you set the destination path on your EFS to `foldername` and you are uploading `foldername`, the true path when you wish to access your models will be `your/efs/mount/foldername/foldername`. If confused, I recommend double-checking with `os.walk`. I am considering modifying this functionality in the future.
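The `os.walk` check above can be sketched as follows. The demo uses a throwaway temporary directory standing in for the EFS mount; on a real setup you would point `list_files` at your actual mount path (e.g. `/mnt/efs`, which is an assumption, not something efsync2 fixes):

```python
import os
import tempfile

def list_files(root: str):
    """Walk a directory tree and return every file path relative to root,
    making the extra nested folder created by scp's recursive upload visible."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            found.append(os.path.relpath(os.path.join(dirpath, name), root))
    return sorted(found)

# Demo: recreate the doubled-folder layout described in the note above.
demo = tempfile.mkdtemp()
os.makedirs(os.path.join(demo, "foldername", "foldername"))
open(os.path.join(demo, "foldername", "foldername", "model.bin"), "w").close()
print(list_files(demo))  # note the doubled foldername/ segment in the path
```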
#standard configuration
efs_filesystem_id: fs-2226b27a # aws efs filesystem id (mount point)
subnet_Id: subnet-17f97a7d # subnet in which the efs is running
ec2_key_name: efsync2-asd913fjgq3 # required key name for starting the ec2 instance
clean_efs: all # defines whether the EFS should be cleaned up before uploading; values: `'all'`, `'pip'`, `'file'`
# aws profile configuration
aws_profile: efsync2 # aws iam profile with required permissions configured in .aws/credentials
aws_region: eu-central-1 # the aws region where the efs is running
# upload files with scp to efs
file_dir: local_dir # extra local directory for file uploads like ML models
file_dir_on_ec2: ml # name of the directory where files from <file_dir> will be uploaded; with scp it will be /<file_dir>
Note: If you sync a file with scp from a local directory (e.g. `model/bert`) to EFS (`my_efs_model`), efsync2 will sync the model to `my_efs_model/bert`. This is due to scp's recursive copy functionality. For example, if you set the destination path on your EFS to `foldername` and you are uploading `foldername`, the true path when you wish to access your models will be `your/efs/mount/foldername/foldername`. If confused, I recommend double-checking with `os.walk`. I am considering modifying this functionality in the future.
#standard configuration
efs_filesystem_id: fs-2226b27a # aws efs filesystem id (mount point)
subnet_Id: subnet-17f97a7d # subnet in which the efs is running
ec2_key_name: efsync2-asd913fjgq3 # required key name for starting the ec2 instance
clean_efs: all # defines whether the EFS should be cleaned up before uploading; values: `'all'`, `'pip'`, `'file'`
# aws profile configuration
aws_profile: efsync2 # aws iam profile with required permissions configured in .aws/credentials
aws_region: eu-central-1 # the aws region where the efs is running
# pip dependencies configuration
efs_pip_dir: lib # pip directory on ec2
python_version: 3.8 # python version used for installing pip dependencies -> should match the lambda runtime used afterwards
requirements: requirements.txt # path to the requirements.txt which holds the installable pip dependencies
# upload files with scp to efs
file_dir: local_dir # extra local directory for file uploads like ML models
file_dir_on_ec2: ml # name of the directory where files from <file_dir> will be uploaded; with scp it will be /<file_dir>
There are several Jupyter notebooks with examples, all of which can be run in a Google Colab notebook:
- installing pip dependencies
- installing pip dependencies and syncing files from s3 to efs
- installing pip dependencies and uploading local files with scp
- syncing files from s3 to efs
- uploading local files with scp
Simplest usage:
from efsync2 import efsync
efsync('efsync.yaml')
cli_short | cli_long | default | description |
---|---|---|---|
-h | --help | - | displays all commands |
-r | --requirements | requirements.txt | path of your requirements.txt |
-cf | --config_file | - | path of your efsync.yaml |
-py | --python_version | 3.8 | Python version used to install dependencies |
-epd | --efs_pip_dir | lib | directory where the pip dependencies will be installed on efs |
-efi | --efs_filesystem_id | - | File System ID from the EFS filesystem |
-ce | --clean_efs | - | defines whether the EFS should be cleaned up before uploading; values: 'all', 'pip', 'file' |
-fd | --file_dir | tmp | directory where all other files will be placed |
-fdoe | --file_dir_on_ec2 | tmp | name of the directory where files from <file_dir> will be uploaded; with scp it will be /<file_dir> |
-ap | --aws_profile | efsync | name of the used AWS profile |
-ar | --aws_region | eu-central-1 | aws region where the efs is running |
-sbd | --subnet_Id | - | subnet id of the efs |
-ekn | --ec2_key_name | - | temporary key name for the ec2 instance |
-s3b | --s3_bucket | - | s3 bucket name from which the files will be downloaded |
-s3k | --s3_keyprefix | - | s3 keyprefix of the directory in s3. Files will be downloaded recursively |
If you want to contribute, be sure to review the contribution guidelines.
A copy of the License is provided in the LICENSE file in this repository.