Pipelined Relational Query Language, pronounced "Prequel".
PRQL is a modern language for transforming data — a simpler and more powerful SQL. Like SQL, it's readable, explicit and declarative. Unlike SQL, it forms a logical pipeline of transformations, and supports abstractions such as variables and functions. It can be used with any database that uses SQL, since it transpiles to SQL.
PRQL was discussed on Hacker News and Lobsters.
Here's a fairly simple SQL query:
```sql
SELECT TOP 20
    title,
    country,
    AVG(salary) AS average_salary,
    SUM(salary) AS sum_salary,
    AVG(salary + payroll_tax) AS average_gross_salary,
    SUM(salary + payroll_tax) AS sum_gross_salary,
    AVG(salary + payroll_tax + benefits_cost) AS average_gross_cost,
    SUM(salary + payroll_tax + benefits_cost) AS sum_gross_cost,
    COUNT(*) AS count
FROM employees
WHERE salary + payroll_tax + benefits_cost > 0 AND country = 'USA'
GROUP BY title, country
HAVING COUNT(*) > 200
ORDER BY sum_gross_cost
```
Even this simple query demonstrates some of the problems with SQL's lack of abstractions:
- Unnecessary repetition — the calculations for each measure are repeated, despite deriving from a previous measure. The repetition in the `WHERE` clause obfuscates the meaning of the expression.
- Functions have multiple operators — `HAVING` & `WHERE` are fundamentally similar operations applied at different stages of the pipeline, but SQL's lack of pipeline-based precedence requires it to have two different operators.
- Operators have multiple functions — the `SELECT` operator both creates new aggregations and selects which columns to include.
- Awkward syntax — when developing the query, commenting out the final line of the `SELECT` list causes a syntax error because of how commas are handled, and we need to repeat the `GROUP BY` columns in the `SELECT` list.
Here's the same query with PRQL:
```elm
from employees
filter country = "USA"                        # Each line transforms the previous result.
derive [                                      # This adds columns / variables.
  gross_salary: salary + payroll_tax,
  gross_cost:   gross_salary + benefits_cost  # Variables can use other variables.
]
filter gross_cost > 0
aggregate by:[title, country] [               # `by` are the columns to group by.
  average salary,                             # These are aggregation calcs run on each group.
  sum     salary,
  average gross_salary,
  sum     gross_salary,
  average gross_cost,
  sum_gross_cost: sum gross_cost,
  count,
]
sort sum_gross_cost
filter count > 200
take 20
```
As well as using variables to reduce unnecessary repetition, the query is also more readable — it flows from top to bottom, each line representing a transformation of the previous line's result. For example, `TOP 20` / `take 20` modify the final result in both queries — but only PRQL represents it as the final transformation. And context is localized — the `aggregate` function contains both the calculations and the columns to group by.
While PRQL is designed for reading & writing by people, it's also much simpler for code to construct or edit PRQL queries. In SQL, adding a filter to a query involves parsing the query to find and then modify the `WHERE` clause, or wrapping the existing query in a CTE. In PRQL, adding a filter just involves appending a `filter` transformation as the final line, as sketched below.
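For example, here's a minimal sketch of that edit, reusing the query above (the appended condition references `average_salary`, a column the pipeline already produces; the threshold is arbitrary):

```elm
from employees
filter country = "USA"
# ...the rest of the pipeline above...
sort sum_gross_cost
filter count > 200
take 20
filter average_salary > 100000   # The added filter — one appended line; nothing else changes.
```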
Here's another SQL query, which calculates returns from prices on days with valid prices.
```sql
SELECT
    date,
    -- Can't use a `WHERE` clause, as it would affect the row that the `LAG` function referenced.
    IF(is_valid_price, price_adjusted / LAG(price_adjusted, 1) OVER
        (PARTITION BY sec_id ORDER BY date) - 1 + dividend_return, NULL) AS return_total,
    IF(is_valid_price, price_adjusted_usd / LAG(price_adjusted_usd, 1) OVER
        (PARTITION BY sec_id ORDER BY date) - 1 + dividend_return, NULL) AS return_usd,
    IF(is_valid_price, price_adjusted / LAG(price_adjusted, 1) OVER
        (PARTITION BY sec_id ORDER BY date) - 1 + dividend_return, NULL)
        - interest_rate / 252 AS return_excess,
    IF(is_valid_price, price_adjusted_usd / LAG(price_adjusted_usd, 1) OVER
        (PARTITION BY sec_id ORDER BY date) - 1 + dividend_return, NULL)
        - interest_rate / 252 AS return_usd_excess
FROM prices
```
This might seem like a convoluted example, but it's taken from a real query. Indeed, it's also simpler and smaller than the full logic — note that it starts from `price_adjusted`, whose logic had to be split into a previous query to avoid the SQL becoming even less readable.
Here's the same query with PRQL:
```elm
prql version:0.1 db:snowflake                # Version number & database name.

func lag_day x = (
  window x
  by sec_id
  sort date
  lag 1
)

func ret x = x / (x | lag_day) - 1 + dividend_return
func excess x = x - interest_rate / 252
func if_valid x = is_valid_price ? x : null

from prices
derive [
  return_total:      price_adjusted     | ret | if_valid  # `|` can be used rather than newlines.
  return_usd:        price_adjusted_usd | ret | if_valid
  return_excess:     return_total | excess
  return_usd_excess: return_usd   | excess
]
select [
  date,
  sec_id,
  return_total,
  return_usd,
  return_excess,
  return_usd_excess,
]
```
Because we define the functions once rather than copying & pasting the code, we get all the benefits of encapsulation and extensibility — we can have reliable & tested functions, whose purpose is explicit, which we can share across queries and colleagues.
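As a sketch of that reuse — assuming a hypothetical `index_prices` table with the same column names — the functions defined above drop into a new query unchanged:

```elm
# Hypothetical second query, reusing `ret` and `if_valid` from above.
from index_prices
derive [
  return_total: price_adjusted | ret | if_valid
]
```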
PRQL is intended to be a modern, simple, declarative language for transforming data, with abstractions such as variables & functions. It's intended to replace SQL, but doesn't have ambitions as a general-purpose programming language. While it's at a pre-alpha stage, it has some immutable principles:
- Pipelined — PRQL is a linear pipeline of transformations — each line of the query is a transformation of the previous line's result. This makes it easy to read, and simple to write. This is also known as "point-free style".
- Simple — PRQL serves both sophisticated engineers and analysts without coding experience. By providing simple, clean abstractions, the language can be both powerful and easy to use.
- Compatible — PRQL transpiles to SQL, so it can be used with any database that uses SQL, and with any existing tools or programming languages that manage SQL. PRQL should allow for a gradual onramp — it should be practical to mix SQL into a PRQL query where PRQL doesn't yet have an implementation. Where possible PRQL can unify syntax across databases.
- Analytical — PRQL's focus is analytical queries; we de-emphasize other SQL features such as inserting data or transactions.
- Extensible — PRQL can be extended through its abstractions, and can evolve without breaking backward-compatibility, because its queries can specify their PRQL version.
I'm excited and inspired by the level of enthusiasm behind the project. Many projects have fallen at this stage, though, so I'm hoping we can demonstrate viability by building things that are within our immediate reach, before potentially expanding.
Here's an initial plan — feedback welcome:
Since becoming public, the language has already improved dramatically, thanks to feedback from dozens of contributors. The current state of the basics is now stable (at least, I don't expect the examples in the README to change much). While we're going to hit corner-cases, we're in a good enough place to start writing the code.
I'd encourage continued discussion on Issues, and examples to be added to the examples path.
We need to parse the PRQL into an AST. We're planning to use pest for this, given how quickly it lets us get started.
This is the area that needs the most immediate help. If anyone is looking for an area to contribute, check out #26 — thank you!
I'm broadly envisioning two passes:
- Materialize the query: replace variables with their definitions, run functions, etc. (see the sketch below).
  - My initial guess is that replacing variables with their definitions (e.g. the first example query) will be much easier than running the functions, even though functions are currently fairly limited in complexity.
  - Partial progress here would still be a success and let us proceed.
- Write the query to SQL.
  - We'll need to make initial progress on the parser before starting here.
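As a sketch of what materializing might look like (a hypothetical illustration, not a committed design), a variable is inlined into each expression that references it:

```elm
# Before materialization:
derive gross_salary: salary + payroll_tax
aggregate by:[title] [average gross_salary]

# After materialization — the variable's definition is inlined:
aggregate by:[title] [average (salary + payroll_tax)]
```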
As well as a command-line tool that transpiles queries, we'd like to make developing in PRQL a wonderful experience, where it feels like it's on your side:
- Syntax highlighting in editors
- A live transpiler in a browser
- Initial type-inference, where it's possible without connecting to the DB, e.g. #54.
- (I'm sure there's more, ideas welcome)
One benefit of PRQL over SQL is that auto-complete, type-inference, and error checking become much more powerful.
This is much harder to build though, since it requires a connection to the database in order to understand the schema of the table. So this would come after having a working transpiler.
We should focus on solving a distinct problem really well. PRQL's goal is to make reading and writing analytical queries easier, and so for the moment that means putting some things out of scope:
- Building infrastructure outside of queries, like lineage. dbt is excellent at that! (issue).
- Writing DDL / index / schema manipulation / inserting data (issue).
- Adding typing into the syntax (issue) (though type inference is a goal above, and this could be a useful extension at some point).
If you're interested in the ideas here and would like to contribute towards them being explored:
- Star this repo.
- Send the repo to a couple of people whose opinion you respect.
- Subscribe to Issue #1 for updates.
- Join the Discord.
- Contribute towards building the compiler #26.
- Contribute towards refining the language: PR an example into the examples path:
- An analytical SQL query that's awkward and we could use as a case for translating to PRQL. If you'd like to add a suggestion of the equivalent PRQL, that's very welcome too.
- An area that isn't sufficiently discussed in the existing proposal, or would be difficult to express in PRQL.
Any of these will inspire others to spend more time developing this; thank you in advance.
- dplyr is a beautiful language for manipulating
data, in R. It's very similar to PRQL. It only works on in-memory R data.
- There's also dbplyr which compiles a subset of dplyr to SQL, though requires an R runtime.
- Kusto
is also a beautiful pipelined language, very similar to PRQL. But it can only
use Kusto-compatible DBs.
- A Kusto-to-SQL transpiler would be a legitimate alternative to PRQL, though there would be some impedance mismatch in some areas. My central criticism of Kusto is that it gives up broad compatibility without getting that much in return.
- Against SQL gives a fairly complete description of SQL's weaknesses, both for analytical and transactional queries. @jamii consistently writes insightful pieces, and it's worth sponsoring him for his updates.
- Julia's DataPipes.jl & Chain.jl, which demonstrate how effective point-free pipelines can be, and how line-breaks can work as pipes.
- OCaml's elegant and simple syntax.
- Ecto is a sophisticated ORM library in Elixir which has pipelined queries as well as more traditional ORM features.
- Morel is a functional language for data, also with a pipeline concept. It doesn't compile to SQL but states that it can access external data.
- Malloy from Looker & @lloydtabb is a new language which combines a declarative syntax for querying with a modelling layer.
- FunSQL.jl is a library in Julia which compiles a nice query syntax to SQL. It requires a Julia runtime.
- LINQ is a pipelined language for the .NET ecosystem which can (mostly) compile to SQL. It was one of the first languages to take this approach.
- Sift is an experimental language which heavily uses pipes and relational algebra.
- After writing this proposal (including the name!), I found Preql. Despite the similar name and compiling to SQL, it seems to focus more on making the language python-like, which is very different to this proposal.
If any of these descriptions can be improved, please feel free to PR changes.
- Joins are implemented as `{join_type} {table} {[conditions]}`. For example:

  ```elm
  from employees
  left_join positions [id=employee_id]
  ```

  ...is equivalent to...

  ```sql
  SELECT * FROM employees LEFT JOIN positions ON id = employee_id
  ```
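  Since the conditions are a list, a join on multiple conditions would presumably be written as follows (a hedged sketch — `country_code` is a hypothetical column):

  ```elm
  from employees
  left_join positions [id=employee_id, country=country_code]
  ```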
- Possibly we could shorten `[id=id]` to `id`, and use SQL's `USING`, but it may be ambiguous with using `id` as a boolean column.
- Functions can take two disjoint types of arguments:
  - Positional arguments, which are required.
  - Named arguments, which are optional and have a default value.

  So a function like:

  ```elm
  func lag col sort_col by_col=id = (
    window col
    by by_col
    sort sort_col
    lag 1
  )
  ```

  ...is called `lag`, and takes three arguments — `col`, `sort_col` & `by_col` — of which the first two must be supplied, and the third can optionally be supplied with `by_col:sec_id`.
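  A hedged sketch of the two call forms, assuming the syntax above (`price` and `close` are hypothetical columns):

  ```elm
  derive prev_price: (lag price date)                # Positional args only; `by_col` defaults to `id`.
  derive prev_close: (lag close date by_col:sec_id)  # The named arg overrides the default.
  ```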
- To create a column, we use `derive {column_name}: {calculation}` in a pipeline. `derive` can also take a list of pairs. Technically this "upserts" the column — it'll either create or overwrite a column, depending on whether it already exists.
  - Potentially it would be possible to discriminate between those, but during the most recent discussion we didn't find a suitable approach.
  - Previously the syntax was just `{column_name} = {calculation}`, but it breaks the pattern of every line starting with a keyword.
  - We could discard the `:` to just have `derive {column_name} ({calculation})`, which would be more consistent with the other functions, but I think less readable (and definitely less familiar).
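  For instance, repeating the pattern from the first example above, both a single pair and a list of pairs are accepted:

  ```elm
  derive gross_salary: salary + payroll_tax    # A single pair.
  derive [                                     # A list of pairs.
    gross_salary: salary + payroll_tax,
    gross_cost:   gross_salary + benefits_cost,
  ]
  ```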
- Most keywords that take a single argument can also take a list, so these are equivalent:

  ```diff
    from employees
  - select salary
  + select [salary]
  ```

- More examples in list-equivalence.md.
- A line-break generally creates a pipelined transformation. For example:

  ```elm
  from tbl
  select [
    col1,
    col2,
  ]
  filter col1 = col2
  ```

  ...is equivalent to:

  ```elm
  from tbl | select [col1, col2] | filter col1 = col2
  ```
- A line-break doesn't create a pipeline in a few cases:
  - Within a list (e.g. the `select` example above).
  - When the following line is a new statement, by starting with a keyword such as `func`.
- See CTE Example.
  - This is no longer point-free, but that's a feature rather than a requirement. The alternative is subqueries, which are fine at small scale, but become difficult to digest as complexity increases.
- How functions represent the previous result — the previous result is passed as the final argument of a function; i.e. `aggregate` would be like this, where `X` is taken from the line above:

  ```elm
  aggregate by=[] calcs X
  ```
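  To make that concrete — a hypothetical illustration, assuming parentheses for grouping — a two-step pipeline would desugar as:

  ```elm
  # The pipeline...
  from employees
  filter country = "USA"

  # ...desugars to the previous result being passed as the final argument:
  filter (country = "USA") (from employees)
  ```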
- Raw syntax — I think we should have backticks represent raw SQL; i.e. `UPPER` could be defined as:

  ```elm
  func upper col = `UPPER(`col`)`
  # or with f-string-like syntax
  func upper col = `UPPER({col})`
  # or with " rather than `
  func upper col = "UPPER({col})"
  # or with f" to preserve "
  func upper col = f"UPPER({col})"
  ```
- Arrays — PRQL is in part inspired by DataPipes.jl, which demonstrates how effective point-free pipelines can be (Chain.jl is similar). One benefit of this is how well it deals with arbitrarily nested pipelines — which are difficult to read in SQL and even in `jq`. Could we do something similar for nested data in PRQL?
  - Here's a snippet from DataPipes.jl — and we could avoid the macros / `do` / `end`:

    ```julia
    @p begin
      text
      strip
      split(__, "\n")
      map() do __
        collect
        map() do __
          __ == chars[begin] ? 1 : 0
        end
      end
      hcat(__...)'
    end
    ```
- Partials — how functional do we want to make the lang? e.g. should we have partial functions? e.g. [now based on an old version of `window`] potentially we don't need the `col` in `lag` here?

  ```elm
  func lag col = window col by:sec_id sort:date lag:1
  ```
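  A hypothetical sketch of that partial — the column argument is dropped from the definition and supplied later by the pipeline:

  ```elm
  # Hypothetical partial syntax — not a committed design.
  func lag = window by:sec_id sort:date lag:1
  ```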
- Boolean logic — how should we represent boolean logic like `or`? With some `or` function that takes `*args` (which we don't currently have a design for)? Or implement dyadic operators; either `or` or `||`? (Same for `not`.)
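  The two candidate designs might read something like this (hypothetical syntax in both cases):

  ```elm
  filter (or [country = "USA", country = "CAN"])   # An `or` function taking a list of args.
  filter country = "USA" or country = "CAN"        # A dyadic operator.
  ```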
- `from` — do we need `from`? A previous version of this proposal didn't require it — just start with the table name. But some initial feedback was that removing `from` made it less clear.
- Readme syntax — we can't get syntax highlighting in GitHub's markdown — is there a solution to this aside from submitting a parser to GitHub / screenshots / creating a website?
  - Currently we use `elm`, as it coincidentally provides the best syntax highlighting (open to suggestions for others!).