java-json-benchmark

Performance testing of serialization and deserialization of Java JSON libraries



Benchmark of Java JSON libraries

Purpose

This project benchmarks the throughput of serialization and deserialization for a variety of Java JSON libraries using JMH. The libraries covered are listed in the Results section below.

When available, both databinding and 'stream' (custom packing and unpacking) implementations are tested. Two kinds of models are evaluated, with payload sizes of 1, 10, 100 and 1000 KB:

  • Users: primitive types, String, List and simple POJOs; and
  • Clients: the above plus arrays, enum, UUID and LocalDate.
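The two models can be pictured with a minimal sketch like the following. The type and field names here are hypothetical, chosen only to illustrate which Java types each model exercises; the actual model classes live in the repository:

```java
import java.time.LocalDate;
import java.util.List;
import java.util.UUID;

// Hypothetical sketch of a "Users"-style model: primitives, String, List, simple POJOs.
record User(long id, String name, boolean active, List<String> roles) {}

// Hypothetical sketch of a "Clients"-style model: adds arrays, enum, UUID, LocalDate.
enum Tier { BASIC, PREMIUM }

record Client(UUID id, Tier tier, LocalDate since, int[] scores) {}

public class Models {
    public static void main(String[] args) {
        User u = new User(1L, "alice", true, List.of("admin"));
        Client c = new Client(UUID.randomUUID(), Tier.PREMIUM,
                LocalDate.of(2024, 1, 30), new int[] {1, 2, 3});
        System.out.println(u.name() + " " + c.tier());
    }
}
```

Types like UUID and LocalDate matter because not every library can map them out of the box, which is why fewer libraries appear in the Clients results.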

This benchmark is written to:

  • randomly generate payloads when the JVM/benchmark is statically loaded; the same seed is shared across runs, so every library sees identical payloads
  • read input data from RAM
  • write data to reusable output streams (when possible), which reduces allocation pressure
  • consume all output streams, to prevent dead-code elimination
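A library-agnostic sketch of those design points (fixed seed, reusable output stream, consumed output) might look like the following. All names here are illustrative, not the benchmark's actual code, and a plain byte copy stands in for a real serializer:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Random;

public class HarnessSketch {
    // Fixed seed: every run (and every library) generates identical payloads.
    static final long SEED = 42L;

    // Reusable buffer: reset between invocations instead of reallocating,
    // keeping allocation pressure out of the measurement.
    static final ByteArrayOutputStream OUT = new ByteArrayOutputStream(1 << 16);

    static String generatePayload(int approxBytes) {
        Random rnd = new Random(SEED); // deterministic generation
        StringBuilder sb = new StringBuilder("{\"users\":[");
        while (sb.length() < approxBytes) {
            if (sb.charAt(sb.length() - 1) != '[') sb.append(',');
            sb.append("{\"id\":").append(rnd.nextInt(1_000_000)).append('}');
        }
        return sb.append("]}").toString();
    }

    static int serializeOnce(String payload) throws IOException {
        OUT.reset(); // reuse the stream, don't reallocate it
        OUT.write(payload.getBytes(StandardCharsets.UTF_8)); // stand-in for a real serializer
        return OUT.size(); // consume the output so it cannot be dead-code eliminated
    }

    public static void main(String[] args) throws IOException {
        String payload = generatePayload(1024); // ~1 KB, like the benchmark's default size
        System.out.println(serializeOnce(payload));
    }
}
```

In the real harness, JMH Blackholes and returned values serve the "consume the output" role; the point is the same: the JIT must never be able to prove the result unused.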

Not evaluated: RAM utilization, compression, and payloads larger than 1 MB.

Results

The benchmarks are written with JMH and for Java 17.

The results below were computed on January 30, 2024 with the following libraries and versions:

Library        Version
avaje-jsonb    1.9
boon           0.34
dsl-json       1.10.0
fastjson       2.0.46
flexjson       3.3
genson         1.6
gson           2.10.1
jackson        2.16.0
jodd json      6.0.3
johnzon        1.2.21
jakarta        2.1.3
json-io        4.24.0
simplejson     1.1.1
json-smart     2.4.11
logansquare    1.3.7
minimal-json   0.9.5
mjson          1.4.1
moshi          1.15.0
nanojson       1.8
org.json       20231013
purejson       1.0.1
qson           1.1.1
tapestry       5.8.3
underscore     1.97
yasson         3.0.3
wast           0.0.12.1

All graphs and sheets are available in this Google Doc.

Raw JMH results are available here.

Users model

Uses: primitive types, String, List and simple POJOs

Deserialization performance

[chart: JSON deserialization performance for primitive types, String, List and simple POJOs]

Serialization performance

[chart: JSON serialization performance for primitive types, String, List and simple POJOs]

Clients model

Uses: primitive types, String, List and simple POJOs, arrays, enum, UUID, LocalDate

Note: fewer libraries are tested with this model due to lack of support for some of the evaluated types.

Deserialization performance

[chart: JSON deserialization performance for primitive types, String, List and simple POJOs, arrays, enum, UUID, LocalDate]

Serialization performance

[chart: JSON serialization performance for primitive types, String, List and simple POJOs, arrays, enum, UUID, LocalDate]

Benchmark configuration

Tests were run on an Amazon EC2 c5.xlarge instance (4 vCPU, 8 GiB RAM).

JMH info:

# JMH version: 1.35
# VM version: JDK 17.0.10, OpenJDK 64-Bit Server VM, 17.0.10+7-LTS
# VM invoker: /usr/lib/jvm/java-17-amazon-corretto.x86_64/bin/java
# VM options: -Xms2g -Xmx2g --add-opens=java.base/java.time=ALL-UNNAMED --add-modules=jdk.incubator.vector
# Blackhole mode: compiler (auto-detected, use -Djmh.blackhole.autoDetect=false to disable)
# Warmup: 5 iterations, 10 s each
# Measurement: 10 iterations, 3 s each
# Timeout: 10 min per iteration
# Threads: 16 threads, will synchronize iterations
# Benchmark mode: Throughput, ops/time

Run

Local run

Prerequisites:

  • JDK 17, with JAVA_HOME set
  • make

By default, ./run ser (respectively ./run deser) runs all serialization (respectively deserialization) benchmarks, both stream and databind, with 1 KB Users payloads.

You can also specify which libraries, APIs, payload sizes, number of iterations (and more) to run. For example:

./run deser --apis stream --libs genson,jackson
./run ser --apis databind,stream --libs jackson
./run deser --apis stream --libs dsljson,jackson --size 10 --datatype users

Type ./run help ser or ./run help deser to print help for those commands.

If you wish to run all benchmarks used to generate the reports above, you can run ./run-everything. This will take several hours to complete, so be patient.

Run on AWS

Prerequisites:

  • JDK 17, with JAVA_HOME set
  • make
  • packer
  • awscli and configured via aws configure

Then, simply run:

make packer

Contribute

Any help to improve the existing benchmarks or write ones for other libraries is welcome.

Adding a JSON library to the benchmark requires little work, and the commit history contains numerous examples of past additions.

Pull requests are welcome.