Sample code to populate Aurora Postgres and load test it.

Install dependencies:

pipenv install
CREATE TABLE readings (
    nmi varchar(10) NOT NULL,
    interval_date date NOT NULL,
    direction char(1) NOT NULL,
    nmi_day_id bigint[] NOT NULL,
    quantity decimal NOT NULL,
    intervals decimal[] NOT NULL,
    PRIMARY KEY (nmi, interval_date, direction)
);
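One row of this table holds a full day of interval data for a meter. A minimal sketch of generating such a row in Python (the 48 half-hourly intervals per day and the value ranges are assumptions, not fixed by the schema; `nmi_day_id` is omitted here):

```python
import random

def make_reading(nmi: str, interval_date: str, direction: str,
                 n_intervals: int = 48) -> dict:
    """Build one row matching the readings schema.

    Assumes 48 half-hourly intervals per day; quantity is the
    daily total, i.e. the sum of the interval values.
    """
    intervals = [round(random.uniform(0.0, 2.5), 3) for _ in range(n_intervals)]
    return {
        "nmi": nmi,
        "interval_date": interval_date,
        "direction": direction,
        "quantity": round(sum(intervals), 3),  # daily total = sum of intervals
        "intervals": intervals,
    }

row = make_reading("NMI0000001", "2024-01-01", "E")
```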
Create a .env file in the project root and add the following keys:
FILE_COUNT=
NMI_PER_FILE=
BUCKET_NAME=
WORKER_THREADS=
DB_HOST=
DB_READONLY_HOST=
DB_USER=
DB_PASSWORD=
DB_PORT=
DB_DATABASE=
LOAD_THREADS=
The program uses aws_s3.table_import_from_s3 to bulk load data into Aurora Postgres. Follow the AWS setup guide to enable the aws_s3 extension and grant the required S3 permissions.
pipenv run python -m generate.load_parallel
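For reference, each import call the loader issues looks roughly like the statement rendered below. This is a sketch only: the column list, CSV format option, and S3 key layout are assumptions, and a real implementation should parameterize the query rather than interpolate strings.

```python
def import_sql(table: str, columns: str, bucket: str, key: str, region: str) -> str:
    """Render an aws_s3.table_import_from_s3 call for one S3 object.

    Uses aws_commons.create_s3_uri to reference the object; the
    '(format csv)' option assumes the generated files are CSV.
    """
    return (
        "SELECT aws_s3.table_import_from_s3("
        f"'{table}', '{columns}', '(format csv)', "
        f"aws_commons.create_s3_uri('{bucket}', '{key}', '{region}'))"
    )

# Hypothetical bucket/key/region, for illustration only.
sql = import_sql("readings", "", "my-bucket", "data/readings_0.csv", "ap-southeast-2")
```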
Once the tables are loaded, change the LOAD_THREADS environment variable to load test the database at varying levels of concurrency.
pipenv run python -m load_test.load
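The load test's concurrency model can be sketched as below: LOAD_THREADS workers each fire queries against the database and the wall time is measured. `query_fn` is a hypothetical stand-in for the real per-query DB call (the actual load_test.load internals are not shown in this README):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load_test(query_fn, load_threads: int, queries_per_thread: int) -> float:
    """Run query_fn from load_threads concurrent workers and
    return the elapsed wall time in seconds."""
    def worker(_: int) -> None:
        for _ in range(queries_per_thread):
            query_fn()

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=load_threads) as pool:
        # Consume the iterator so worker exceptions propagate.
        list(pool.map(worker, range(load_threads)))
    return time.perf_counter() - start
```

Raising LOAD_THREADS increases concurrent pressure on the read endpoint, which is what the varying-load runs above measure.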