/fish-lab

Prerequisites before deploying to AWS Lambda

  1. Python 3.8 or above
  2. pip
  3. pipenv
  4. Clone this project
  5. Navigate to the project folder and run '$ pipenv shell' (make sure to use the correct pipenv Python version!)
  6. Next, run '$ pipenv install'

Once the packages have been installed, we can move on to the next step

AWS prerequisites (make sure you have an account!)

AWS RDS

  1. Create a MySQL DB instance in AWS RDS (db.t2.micro is more than enough)
  2. Enter any instance identifier
  3. Enter the database username and password (make sure to remember these!)
  4. Leave everything else as default; for convenience, set the DB VPC to the default VPC
  5. Next, expand the Additional configuration dropdown and enter an initial database name (remember this as well!)
  6. Leave everything else as default and create the database!
  7. Next, navigate to 'Security Groups' under the EC2 service and add an inbound rule for MySQL (port 3306)
  8. Just allow every source for now (0.0.0.0/0) for convenience, then save the rule!
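
You can optionally sanity-check the connection from your machine before wiring up Django. This is a minimal sketch, assuming PyMySQL is installed ('pipenv install pymysql'); the endpoint, credentials, and database name below are placeholders for the values you chose above:

    import pymysql  # third-party MySQL driver, not part of the project's pinned dependencies

    # Placeholders - use your own RDS endpoint, credentials, and initial database name
    conn = pymysql.connect(
        host="your-instance.xxxxxxxx.us-east-1.rds.amazonaws.com",
        user="your-db-username",
        password="your-db-password",
        database="your-initial-db-name",
        port=3306,
    )
    with conn.cursor() as cursor:
        cursor.execute("SELECT VERSION()")
        # Prints the MySQL server version if the security group rule lets you in
        print(cursor.fetchone())
    conn.close()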

AWS S3

  1. Create an S3 bucket with any name
  2. Allow public access (turn off 'Block all public access') and enable static website hosting.
  3. Under the Permissions tab, add the following policy to the bucket permissions (change [your-bucket-name-here] to your bucket name):
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::[your-bucket-name-here]/*"]
          }
        ]
      }

Database Migration

Currently looking for a better solution...

  1. Open the ddac_project folder; in the settings.py file, add the database credentials to the DATABASES variable (see the settings.py sketch after this list):
        NAME - the database name
        USER - the database username
        PASSWORD - the database password
        HOST - the database hostname (the RDS endpoint URL)
        PORT - the database port (usually 3306)
  2. After that, navigate to the project folder in the terminal and run '$ pipenv shell' (make sure you are in the project root directory)
  3. Run the following: 'python manage.py migrate'
  4. Create a superuser with 'python manage.py createsuperuser'

  5. Once successful, change the database credentials to read from environment variables instead (again, see the sketch after this list):
        NAME - os.environ['NAME']
        USER - os.environ['USER']
        PASSWORD - os.environ['PASSWORD']
        HOST - os.environ['HOST']
        PORT - os.environ['PORT']
  6. Once complete, we can start deployment with Zappa
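
For reference, the DATABASES block in settings.py ends up looking roughly like the sketch below once steps 1 and 5 are done (this assumes the standard Django MySQL backend; during the initial migrate you can temporarily hardcode the values instead of reading them from os.environ):

    import os  # needed at the top of settings.py for os.environ

    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.mysql",
            "NAME": os.environ["NAME"],          # the database name
            "USER": os.environ["USER"],          # the database username
            "PASSWORD": os.environ["PASSWORD"],  # the database password
            "HOST": os.environ["HOST"],          # the RDS endpoint URL
            "PORT": os.environ["PORT"],          # usually 3306
        }
    }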

Deployment

  1. Delete the zappa_settings.json file if it exists
  2. Back in the terminal, run 'zappa init' and follow the wizard
  3. Once the wizard is complete, run 'zappa deploy dev'
  4. Don't worry about the 500/502 error shown by Zappa for now; we will fix it
  5. Before fixing the error, grab the API Gateway URL (e.g. ******.execute-api.us-east-1.amazonaws.com) and append its hostname to ALLOWED_HOSTS in settings.py (see the first sketch after this list)
  6. Next, visit the Lambda service in the AWS console and select the project function.
  7. Under the Configuration tab -> Environment variables, add the respective environment variables (e.g. key: NAME, value: your database name); a scripted alternative is shown in the second sketch after this list
  8. Once that is settled, go back to the terminal and update the deployment with 'zappa update [zappa-name]'
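
For step 5, the ALLOWED_HOSTS change in settings.py is just the API Gateway hostname (without the https:// scheme); the entries below are placeholders:

    # settings.py
    ALLOWED_HOSTS = [
        "******.execute-api.us-east-1.amazonaws.com",  # placeholder - your API Gateway host
        "127.0.0.1",  # keep local hosts if you still run the dev server
    ]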
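
For step 7, if you would rather set the Lambda environment variables from a script than in the console, here is a rough boto3 sketch (the function name and values are placeholders; note that this call replaces the function's existing environment variables, so include all of them):

    import boto3

    lambda_client = boto3.client("lambda")
    lambda_client.update_function_configuration(
        FunctionName="your-zappa-function-name",  # placeholder - the function Zappa created
        Environment={
            "Variables": {
                "NAME": "your-database-name",
                "USER": "your-db-username",
                "PASSWORD": "your-db-password",
                "HOST": "your-instance.xxxxxxxx.us-east-1.rds.amazonaws.com",
                "PORT": "3306",
            }
        },
    )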

Where is the frontend?

too lazy to write a script, so...

copy the whole content of the frontend folder and just dump it into your S3 bucket (or script it; see the sketch below)
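
If you would rather script the upload, here is a rough boto3 sketch (the bucket name and folder path are placeholders); it guesses each file's Content-Type so the browser renders the pages instead of downloading them:

    import mimetypes
    import os
    import boto3

    bucket = "your-bucket-name-here"  # placeholder - your S3 bucket
    frontend_dir = "frontend"         # placeholder - path to the frontend folder

    s3 = boto3.client("s3")
    for root, _, files in os.walk(frontend_dir):
        for name in files:
            path = os.path.join(root, name)
            # Key relative to the frontend folder, with forward slashes for S3
            key = os.path.relpath(path, frontend_dir).replace(os.sep, "/")
            content_type = mimetypes.guess_type(path)[0] or "binary/octet-stream"
            s3.upload_file(path, bucket, key, ExtraArgs={"ContentType": content_type})
            print(f"uploaded {key}")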

one last thing: in the index.js file, change the API_URL to your API Gateway URL

finally, just visit your S3 website hosting URL and you're done.