This repo contains the app that was used to collect the data for ORM benchmarks.
You can learn more about the benchmark methodology and results in this blog post: Performance Benchmarks: Comparing Query Latency across TypeScript ORMs & Databases.
Clone the repo, navigate into it, and install dependencies:

```bash
git clone git@github.com:prisma/orm-benchmarks.git
cd orm-benchmarks
npm install
```
Set the `DATABASE_URL` environment variable to your database connection string in a `.env` file.

First, create a `.env` file:

```bash
touch .env
```

Then open the `.env` file and add the following line:

```bash
DATABASE_URL="your-database-url"
```

For example:

```bash
DATABASE_URL="postgresql://user:password@host:port/db"
```
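The benchmark scripts read this value from the environment at runtime. As a minimal sketch (not the repo's actual code), a TypeScript script might pick it up and fail fast when it is missing:

```typescript
// Hypothetical sketch: read the connection string from the environment
// and throw a descriptive error if it is not set.
function getDatabaseUrl(): string {
  const url = process.env.DATABASE_URL;
  if (!url) {
    throw new Error("DATABASE_URL is not set - add it to .env or export it in your shell");
  }
  return url;
}
```
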
Alternative: Set the `DATABASE_URL` in the terminal

Alternatively, you can set the `DATABASE_URL` in the terminal:

```bash
export DATABASE_URL="postgresql://user:password@host:port/db"
```
To create the database and the schema, run the `prisma db push` command, pointing it to the Prisma schema for your database.

If you use PostgreSQL, run:

```bash
npx prisma db push --schema ./prisma-pg/schema.prisma
```
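The `--schema` flag points Prisma at a schema file whose datasource reads the connection string from `DATABASE_URL`. A typical datasource block looks like the fragment below; the actual `./prisma-pg/schema.prisma` in this repo may define more (generator and models):

```
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
```
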
Note: We may add more databases in the future.
Note for PostgreSQL: Since the data preparation/seeding relies on `pg_dump` and `pg_restore`, the PostgreSQL version on the machine that executes the script must match the version of the target PostgreSQL server.
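One way to catch a mismatch up front is to compare the major versions reported by the client tools and the server. The sketch below hard-codes the version strings for illustration; in practice they would come from `pg_dump --version` and a `SELECT version();` query against your target server:

```shell
# Extract the major version (e.g. "16") from a PostgreSQL version string.
major_version() {
  echo "$1" | grep -oE '[0-9]+' | head -n 1
}

# Illustrative inputs; replace with "$(pg_dump --version)" and the output
# of `psql -c "SELECT version();"` against your target server.
client="pg_dump (PostgreSQL) 16.2"
server="PostgreSQL 16.2 on x86_64-pc-linux-gnu"

if [ "$(major_version "$client")" = "$(major_version "$server")" ]; then
  echo "versions match"
else
  echo "version mismatch: client=$(major_version "$client") server=$(major_version "$server")"
fi
```
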
```bash
sh ./benchmark.sh -i 500 -s 1000
```

This executes the benchmark scripts with 500 iterations and a sample size of 1000 records per table. See below for the different options you can provide to any benchmark run.
The results of the benchmark run will be stored in a folder called `results/DB-SIZE-ITERATIONS-TIMESTAMP`, e.g. `results/postgresql-1000-500-1721027353940`. This folder will have one `.csv` file per ORM, e.g.:

```
results/postgresql-1000-500-1721027353940
├── drizzle.csv
├── prisma.csv
└── typeorm.csv
```
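The exact CSV column layout isn't documented here, so the snippet below assumes a simple two-column layout (`query,latency_ms` with a header line) purely for illustration. It computes the mean latency per query from one ORM's results file:

```typescript
// Hypothetical sketch: the real CSV layout may differ. Assumes rows of
// the form "query,latency_ms" preceded by a header line.
function meanLatencies(csv: string): Map<string, number> {
  const sums = new Map<string, { total: number; count: number }>();
  for (const line of csv.trim().split("\n").slice(1)) {
    const [query, latency] = line.split(",");
    const entry = sums.get(query) ?? { total: 0, count: 0 };
    entry.total += Number(latency);
    entry.count += 1;
    sums.set(query, entry);
  }
  const means = new Map<string, number>();
  for (const [query, { total, count }] of sums) {
    means.set(query, total / count);
  }
  return means;
}
```
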
You can execute the benchmarks by running the `benchmark.sh` script:

```bash
sh ./benchmark.sh [options]
```
You can provide the following options to the script:

| Name | Short | Default | Description | Required |
|---|---|---|---|---|
| `--iterations` | `-i` | 2 | Number of times to execute the benchmarks | No |
| `--size` | `-s` | 50 | Size of the data set (number of records per table) | No |
| `--database-url` | `-d` | n/a | Database connection string | No |
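For reference, short options like these are commonly parsed with the shell's `getopts` builtin. The sketch below is hypothetical and not the actual contents of `benchmark.sh` (which also accepts the long forms):

```shell
# Hypothetical sketch of option parsing with getopts; benchmark.sh's real
# implementation may differ (getopts alone does not handle long options).
set -- -i 500 -s 1000   # simulate: sh ./benchmark.sh -i 500 -s 1000

iterations=2   # defaults as documented
size=50
database_url=""

while getopts "i:s:d:" opt; do
  case $opt in
    i) iterations=$OPTARG ;;
    s) size=$OPTARG ;;
    d) database_url=$OPTARG ;;
  esac
done

echo "iterations=$iterations size=$size"
```
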
For example:
sh ./benchmark.sh -i 500 -s 1000 --database-url postgresql://user:password@host:port/db
You can turn on debug settings via the `DEBUG` environment variable:

- `benchmarks:compare-results`: Compare the results at the end of each benchmark run. Note that this approach will consume more memory because the results of all executed queries are collected.
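Namespace flags like this are typically checked against the comma-separated `DEBUG` variable (the pattern popularized by the `debug` npm package). A minimal sketch of such a check, not the repo's actual code:

```typescript
// Hypothetical sketch: check whether a debug namespace is enabled via the
// comma-separated DEBUG environment variable.
function debugEnabled(namespace: string): boolean {
  const flags = (process.env.DEBUG ?? "").split(",").map((f) => f.trim());
  return flags.includes(namespace);
}
```

Usage: `DEBUG=benchmarks:compare-results sh ./benchmark.sh -i 500 -s 1000`.
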
- This repo contains an unfinished MySQL implementation.
- The final results that are published on benchmarks.prisma.io are based on the data in `./results-website`.
- The script in `./src/lib/website-output.ts` is used to generate the JSON structures that are the basis for the result visualisation on benchmarks.prisma.io.