Constructed by
- Server side:
  - Python / Flask 2 on Lambda + API Gateway
  - DynamoDB
  - Deployed with Terraform and Serverless Framework v3
- Frontend: Vue.js 3 + TypeScript + Tailwind CSS
  - Deployed with Vite
You need the following:
- common
  - aws-cli >= 1.27.X
  - Terraform >= 1.4.6
- serverless
  - Node.js >= 18.16.X
  - Python >= 3.10.X
- frontend
  - Node.js >= 18.16.X
Install Serverless Framework, Python venv, and Terraform on macOS
# At project root dir
cd (project_root/)serverless
npm install
python -m venv .venv
brew install tfenv
tfenv install 1.4.6
tfenv use 1.4.6
Install npm packages
# At project root dir
cd (project_root/)serverless
npm install
Install python packages
. .venv/bin/activate
pip install -r requirements.txt
Create S3 buckets like below in the ap-northeast-1 region (a CLI example follows this list)
- your-serverless-deployment
  - Stores deployment state files for Terraform and Serverless Framework
  - Create the directory "terraform/your-project-name"
- your-serverless-configs
  - Stores config files for the app
  - Create the directories "your-project-name/frontend/prd" and "your-project-name/frontend/dev"
Copy sample file and edit variables for your env
cd (project_root_dir)/terraform
cp terraform.tfvars.sample terraform.tfvars
vi terraform.tfvars
prj_prefix = "your-project-name"
...
route53_zone_id = "Set your route53 zone id"
domain_api_prd = "your-domain-api.example.com"
domain_api_dev = "your-domain-api-dev.example.com"
domain_static_site_prd = "your-domain-static.example.com"
domain_static_site_dev = "your-domain-static-dev.example.com"
domain_media_site_prd = "your-domain-media.example.com"
domain_media_site_dev = "your-domain-media-dev.example.com"
export AWS_SDK_LOAD_CONFIG=1
export AWS_PROFILE=your-aws-profile-name
export AWS_REGION="ap-northeast-1"
Command example to init
terraform init -backend-config="bucket=your-serverless-deployment" -backend-config="key=terraform/your-project-name/terraform.tfstate" -backend-config="region=ap-northeast-1" -backend-config="profile=your-aws-profile-name"
terraform apply -auto-approve -var-file=./terraform.tfvars
Create Admin User by aws-cli
export AWS_SDK_LOAD_CONFIG=1
export AWS_PROFILE=your-aws-profile-name
export AWS_DEFAULT_REGION="ap-northeast-1"
aws cognito-idp admin-create-user \
--user-pool-id ap-northeast-1_xxxxxxxxx \
--username your-username \
--user-attributes \
Name=email,Value=sample@example.com \
Name=email_verified,Value=True \
Name=custom:role,Value=admin \
Name=custom:acceptServiceIds,Value=hoge \
--desired-delivery-mediums EMAIL
You will get a temporary password by email. Update the password to a permanent one:
aws cognito-idp admin-set-user-password \
--user-pool-id ap-northeast-1_xxxxxxxxx \
--username your-username \
--password 'your-permanent-password' \
--permanent
- Access the S3 console of the media file bucket
- Select the "Permissions" tab
- Press the "Edit" button in "Cross-origin resource sharing (CORS)"
- Set as below
[
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"PUT",
"POST",
"DELETE",
"GET"
],
"AllowedOrigins": [
"https://your-domain.example.com"
],
"ExposeHeaders": []
}
]
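The same CORS rules can also be applied with aws-cli. Note that the CLI wraps the rules in a "CORSRules" key; the bucket name below is a placeholder for your media bucket.
Save the rules as cors.json:
{
  "CORSRules": [
    {
      "AllowedHeaders": ["*"],
      "AllowedMethods": ["PUT", "POST", "DELETE", "GET"],
      "AllowedOrigins": ["https://your-domain.example.com"],
      "ExposeHeaders": []
    }
  ]
}
Then apply it:
aws s3api put-bucket-cors --bucket your-domain-media.example.com --cors-configuration file://cors.json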
If you want to back up DynamoDB items, set up as below (an aws-cli sketch follows the console steps)
- Access to "AWS Backup" on AWS Console and set region
- Press "Create backup plan"
- Input as follows for "Plan"
- Start options
- Select "Build a new plan"
- Backup plan name: your-project-dynamodb-backup
- Backup rule configuration
- Backup vault: Default
- Backup rule name: your-project-dynamodb-backup-rule
- Backup frequency: Daily
- Backup window: Customize backup window
- Backup window settings: as you like
- Press "Create backup plan"
- Start options
- Input as follows for "Assign resources"
- General
- Resource assignment name: your-project-dynamodb-backup-assignment
- IAM role: Default role
- Resource selection
-
- Define resource selection: Include specific resource types
-
- Select specific resource types: DynamoDB
- Table names: All tables
-
- Refine selection using tags
- Key: backup
- Condition for value: Eauqls
- Value: aws-backup
-
- Press "Assign resources"
- General
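If you prefer scripting this instead of the console, a rough aws-cli sketch is below; the cron schedule and IAM role ARN are example values, so verify the JSON schemas against the AWS Backup documentation before use.
Save backup-plan.json:
{
  "BackupPlanName": "your-project-dynamodb-backup",
  "Rules": [
    {
      "RuleName": "your-project-dynamodb-backup-rule",
      "TargetBackupVaultName": "Default",
      "ScheduleExpression": "cron(0 17 * * ? *)"
    }
  ]
}
Save backup-selection.json (the IAM role must be allowed to run AWS Backup):
{
  "SelectionName": "your-project-dynamodb-backup-assignment",
  "IamRoleArn": "arn:aws:iam::your-aws-account-id:role/service-role/AWSBackupDefaultServiceRole",
  "Resources": ["arn:aws:dynamodb:*:*:table/*"],
  "ListOfTags": [
    { "ConditionType": "STRINGEQUALS", "ConditionKey": "backup", "ConditionValue": "aws-backup" }
  ]
}
Then create the plan and assign resources (the plan id comes from the first command's output):
aws backup create-backup-plan --backup-plan file://backup-plan.json
aws backup create-backup-selection --backup-plan-id your-backup-plan-id --backup-selection file://backup-selection.json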
Set up config files per stage
cd (project_root/)serverless
cp -r config/stages-sample config/stages
vi config/stages/*
# config/stages/common.yml
service: 'your-project-name'
awsAccountId: 'your-aws-account-id'
defaultRegion: 'ap-northeast-1'
deploymentBucketName: 'your-serverless-deployment'
...
# config/stages/prd.yml
# config/stages/dev.yml
domainName: your-domain-api.example.com
corsAcceptOrigins: 'https://your-domain.example.com'
notificationEmail: admin@example.com
...
vpc:
securityGroupIds: your-security-group-id
subnetIds: your-subnet-id
...
associateWafName: your-waf-name # If you need to use WAF, set the name of an existing WebACL. If the name does not exist, this setting is ignored.
...
mediaS3BucketName: ""
media:
s3BucketName: "your-domain-media.example.com"
...
cognito:
region: 'ap-northeast-1'
userpoolId: 'ap-northeast-1_*********'
appClientId: '**************************'
...
Execute the below commands
cd (project_root/)serverless
export AWS_SDK_LOAD_CONFIG=1
export AWS_PROFILE="your-profile-name"
export AWS_REGION="ap-northeast-1"
sls create_domain # Deploy for dev
If deploying to prod
sls create_domain --stage prd # Deploy for prod
Execute the below commands
cd (project_root/)serverless
export AWS_SDK_LOAD_CONFIG=1
export AWS_PROFILE="your-profile-name"
export AWS_REGION="ap-northeast-1"
sls deploy # Deploy for dev
If deploying to prod
sls deploy --stage prd # Deploy for prod
- Access https://github.com/{your-account}/{repository-name}/settings/secrets/actions
- Press "New repository secret"
- Add the below (a GitHub CLI example follows this list)
- Common
  - AWS_ACCESS_KEY_ID : your-aws-access-key
  - AWS_SECRET_ACCESS_KEY : your-aws-secret-key
- For Production
  - CLOUDFRONT_DISTRIBUTION : your CloudFront distribution created by Terraform for production
  - S3_CONFIG_BUCKET : "your-serverless-configs/your-project/frontend/prd" for production
  - S3_RESOURCE_BUCKET : "your-domain-static-site.example.com" for production
- For Development
  - CLOUDFRONT_DISTRIBUTION_DEV : your CloudFront distribution created by Terraform for development
  - S3_CONFIG_BUCKET_DEV : "your-serverless-configs/your-project/frontend/dev" for development
  - S3_RESOURCE_BUCKET_DEV : "your-domain-static-site-dev.example.com" for development
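The same secrets can also be registered with the GitHub CLI; all values below are placeholders.
gh secret set AWS_ACCESS_KEY_ID --body "your-aws-access-key"
gh secret set AWS_SECRET_ACCESS_KEY --body "your-aws-secret-key"
gh secret set CLOUDFRONT_DISTRIBUTION --body "your-cloudfront-distribution-id"
gh secret set S3_CONFIG_BUCKET --body "your-serverless-configs/your-project/frontend/prd"
gh secret set S3_RESOURCE_BUCKET --body "your-domain-static-site.example.com"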
cd (project_root_dir/)frontend
cp src/config/config.json.sample src/config/config.json
vi src/config/config.json
{
...
"api": {
"origin": "https://api.example.com",
"basePath": "/api"
},
"media": {
"url": "https://media.example.com"
},
...
}
cp src/configs/cognito-client-config.json.sample src/configs/cognito-client-config.json
vi src/configs/cognito-client-config.json
{
"Region": "ap-northeast-1",
"UserPoolId": "ap-northeast-1_xxxxxxxxx",
"ClientId": "xxxxxxxxxxxxxxxxxxxxxxxxxx",
"IdentityPoolId": "ap-northeast-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
cp src/configs/firebase-app-sdk-config.json.sample src/configs/firebase-app-sdk-config.json
vi src/configs/firebase-app-sdk-config.json
{
"apiKey": "******************",
"authDomain": "sample-service.firebaseapp.com",
"databaseURL": "https://sample-service.firebaseio.com",
"projectId": "sample-service",
"storageBucket": "sample-service.appspot.com",
"messagingSenderId": "******************",
"appId": "******************",
"measurementId": "********"
}
- Access the TinyMCE Dashboard
- Get your Tiny API key
- Move to Approved Domains, then add your static-site domain
Install packages for development
cd (project_root/)serverless
. .venv/bin/activate
pip install pylint
Activate venv
cd (project_root/)serverless
. .venv/bin/activate
Build the Docker image (only needed the first time)
cd (project_root/)serverless
docker-compose build
Start DynamoDB Local on Docker
cd (project_root)/serverless/develop/
docker-compose up -d
DynamoDB setup
cd (project_root/)serverless
sls dynamodb start
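To check that DynamoDB Local is reachable, you can list the tables; this assumes the default port 8000, so adjust it to the port mapped in docker-compose.yml if different.
aws dynamodb list-tables --endpoint-url http://localhost:8000 --region ap-northeast-1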
Execute the below commands
cd (project_root/)serverless
sls wsgi serve
Request http://127.0.0.1:5000
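For a quick smoke test (the exact route depends on your API):
curl -i http://127.0.0.1:5000/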
If you want to stop DynamoDB Local on Docker
cd (project_root)/serverless/develop/
docker-compose stop
cd (project_root/)serverless
sls invoke local --function funcName --data param
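For example, with a hypothetical function name and a JSON payload:
sls invoke local --function exampleFunc --data '{"id": "123"}'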
Install packages for the converter if the conversion target service uses MySQL
cd (project_root/)serverless
. .venv/bin/activate
pip install PyMySQL
Set up the converter for the target service
cd (root/)serverless/develop/db_converter/services/
git clone {repository url of target service converter}
Execute converter
cd (root/)serverless/develop/db_converter
python main.py {service_name}
Install k6 for macOS
brew install k6
k6 run ./dev_tools/performance/vote.js --vus NN --duration MMs
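For example, 10 virtual users for 30 seconds (values are illustrative only):
k6 run ./dev_tools/performance/vote.js --vus 10 --duration 30s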
Destroy serverless resources
cd (project_root/)serverless
sls remove --stage Target-stage
sls delete_domain --stage Target-stage
Remove the files in the S3 buckets named "your-domain.example.com-cloudfront-logs" and "your-domain.example.com"
Destroy static site resources with Terraform
cd (project_root/)terraform
terraform destroy -auto-approve -var-file=./terraform.tfvars