Dynamic Predicate Transfer

This is the repository for our paper submission.

It contains the implementation of Dynamic Predicate Transfer (RPT+), a customized version of DuckDB v1.3.0. Compared to the original Robust Predicate Transfer (RPT), RPT+ introduces the following key improvements:

  • An asymmetric filter transfer plan to reduce redundant Bloom Filter (BF) construction.
  • A cascade filter mechanism that combines min-max and Bloom filters for hierarchical filtering efficiency (see the sketch after this list).
  • A dynamic pipeline strategy that adapts filter creation and transfer based on runtime selectivity.
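
To make the cascade idea concrete, here is a minimal, self-contained C++ sketch of a filter that runs a cheap min-max range test before probing a Bloom filter. It is illustrative only: the class name, sizes, and hash scheme are invented for this example and do not mirror the data structures used in this repository.

#include <algorithm>
#include <bitset>
#include <cstdint>
#include <functional>
#include <iostream>
#include <limits>

// Hypothetical cascade filter: a min-max range test guards a toy Bloom filter.
class CascadeFilter {
	int64_t min_ = std::numeric_limits<int64_t>::max();
	int64_t max_ = std::numeric_limits<int64_t>::min();
	std::bitset<1 << 16> bloom_; // toy Bloom filter: one bit array, two hash probes

	size_t Hash(int64_t key, uint64_t seed) const {
		return std::hash<uint64_t>{}(static_cast<uint64_t>(key) * seed) % bloom_.size();
	}

public:
	void Insert(int64_t key) {
		min_ = std::min(min_, key);
		max_ = std::max(max_, key);
		bloom_.set(Hash(key, 0x9E3779B97F4A7C15ULL));
		bloom_.set(Hash(key, 0xC2B2AE3D27D4EB4FULL));
	}

	// Keys outside [min_, max_] are rejected by the cheap range test; only
	// keys inside the range pay for the Bloom-filter probes.
	bool MightContain(int64_t key) const {
		if (key < min_ || key > max_) {
			return false;
		}
		return bloom_.test(Hash(key, 0x9E3779B97F4A7C15ULL)) &&
		       bloom_.test(Hash(key, 0xC2B2AE3D27D4EB4FULL));
	}
};

int main() {
	CascadeFilter filter;
	for (int64_t key : {100, 250, 999}) {
		filter.Insert(key);
	}
	std::cout << filter.MightContain(250) << " "  // 1: passes both tests
	          << filter.MightContain(5) << "\n";  // 0: rejected by the range test alone
	return 0;
}

The point of the cascade is that the range comparison is branch-cheap and rejects out-of-range probe keys before any hashing or bit-array access happens.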

Build

You can build this repository in the same way as the original DuckDB. A Makefile wraps the build process. For available build targets and configuration flags, see the DuckDB Build Configuration Guide.

make                   # Build optimized release version
make release           # Same as 'make'
make debug             # Build with debug symbols
GEN=ninja make         # Use Ninja as backend
BUILD_BENCHMARK=1 make # Build with benchmark support

Benchmark

Join Order Benchmark (JOB)

DuckDB includes a built-in implementation of the Join Order Benchmark. You can build and run it with:

BUILD_BENCHMARK=1 BUILD_TPCH=1 BUILD_TPCDS=1 BUILD_HTTPFS=1 CORE_EXTENSIONS='tpch' make
build/release/benchmark/benchmark_runner "benchmark/imdb/.*.benchmark" --threads=1

SQLStorm

To run the SQLStorm benchmark:

  1. Clone and set up the benchmark framework from the SQLStorm repository.
  2. Download the StackOverflow Math dataset and load it according to SQLStorm’s setup instructions.
  3. The list of queries that are executable with DuckDB is available here; a sketch of running such a query list against this build follows this list.
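
As a rough illustration, the following C++ sketch times queries from such a list against a DuckDB database file using the embedded C++ API. The database filename and query paths are placeholders; substitute the files produced by SQLStorm's setup.

#include "duckdb.hpp"

#include <chrono>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main() {
	// Placeholder database file: assumed to hold the loaded StackOverflow Math dataset.
	duckdb::DuckDB db("stackoverflow_math.duckdb");
	duckdb::Connection con(db);

	// Placeholder paths: point these at the downloaded SQLStorm query list.
	std::vector<std::string> query_files = {"queries/q001.sql", "queries/q002.sql"};
	for (const auto &path : query_files) {
		std::ifstream in(path);
		std::stringstream buffer;
		buffer << in.rdbuf(); // read the whole query file

		auto start = std::chrono::steady_clock::now();
		auto result = con.Query(buffer.str());
		auto end = std::chrono::steady_clock::now();

		if (result->HasError()) {
			std::cerr << path << " failed: " << result->GetError() << "\n";
			continue;
		}
		auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
		std::cout << path << ": " << ms << " ms\n";
	}
	return 0;
}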

Below is the original DuckDB README.



DuckDB

DuckDB is a high-performance analytical database system. It is designed to be fast, reliable, portable, and easy to use. DuckDB provides a rich SQL dialect, with support far beyond basic SQL. DuckDB supports arbitrary and nested correlated subqueries, window functions, collations, complex types (arrays, structs, maps), and several extensions designed to make SQL easier to use.

DuckDB is available as a standalone CLI application and has clients for Python, R, Java, Wasm, etc., with deep integrations with packages such as pandas and dplyr.
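
Since this repository is itself a C++ build of DuckDB, the library can also be embedded directly in a host application. The following minimal sketch uses DuckDB's C++ API with an invented table and query (including a window function, one of the features mentioned above); it assumes you link against the library produced by the build.

#include "duckdb.hpp"

int main() {
	duckdb::DuckDB db(nullptr); // in-memory database
	duckdb::Connection con(db);

	// Invented example data.
	con.Query("CREATE TABLE sales (region VARCHAR, amount INTEGER)");
	con.Query("INSERT INTO sales VALUES ('east', 10), ('east', 20), ('west', 5)");

	// A window function computes a per-region total alongside each row.
	auto result = con.Query(
	    "SELECT region, amount, "
	    "       sum(amount) OVER (PARTITION BY region) AS region_total "
	    "FROM sales ORDER BY region, amount");
	result->Print();
	return 0;
}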

For more information on using DuckDB, please refer to the DuckDB documentation.

Installation

If you want to install DuckDB, please see our installation page for instructions.

Data Import

For CSV files and Parquet files, data import is as simple as referencing the file in the FROM clause:

SELECT * FROM 'myfile.csv';
SELECT * FROM 'myfile.parquet';

Refer to our Data Import section for more information.

SQL Reference

The documentation contains a SQL introduction and reference.

Development

For development, DuckDB requires CMake, Python 3, and a C++11-compliant compiler. Run make in the root directory to compile the sources; use make debug to build a non-optimized debug version. After making changes, run make unit and make allunit to verify that your build works properly. To test performance, run BUILD_BENCHMARK=1 BUILD_TPCH=1 make and then execute the standard benchmarks from the root directory with ./build/release/benchmark/benchmark_runner. The details of benchmarks are in our Benchmark Guide.
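
Collected in one place, the development workflow described above is:

make debug                                 # Build a non-optimized debug version
make unit                                  # Run the unit tests
make allunit                               # Run the full test suite
BUILD_BENCHMARK=1 BUILD_TPCH=1 make        # Release build with benchmark support
./build/release/benchmark/benchmark_runner # Run the standard benchmarks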

Please also refer to our Build Guide and Contribution Guide.

Support

See the Support Options page.