/ˈkwɪr..ɪŋ/
qryn is a polyglot LogQL API built on top of ClickHouse with native support for popular data ingestion formats
- Built-in Explore UI and LogQL CLI for querying and extracting data
- Native Grafana and LogQL APIs for querying, processing, ingesting, tracing and alerting
- Powerful pipeline to dynamically search, filter and extract data from logs, events, traces and beyond
- Ingestion and PUSH APIs transparently compatible with LogQL, PromQL, InfluxDB, Elastic and more
- Ready to use with Agents such as Promtail, Grafana-Agent, Vector, Logstash, Telegraf and many others
- Cloud native, stateless and compact design
Get started using our Documentation or consult the Wiki 💡
qryn implements a complete LogQL API buffered by a fast bulking LRU sitting on top of ClickHouse tables, relying on ClickHouse's columnar search and insert performance alongside its solid distribution and clustering capabilities for stored data. qryn does not parse or index incoming logs; instead, it groups log streams using the same label system as Prometheus.
qryn implements a broad range of LogQL Queries to provide transparent compatibility with the Loki API
The Grafana Loki datasource can be used to natively query logs and display extracted timeseries
🎉 No plugins needed
- Log Stream Selector
- Line Filter Expression
- Label Filter Expression
- Parser Expression
- Log Range Aggregations
- Aggregation operators
- Unwrap Expression
- Line Format Expression
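A few illustrative queries combining the expression types above (the `job` label, `json`-parsed fields and values shown here are hypothetical):

```logql
# Log Stream Selector + Line Filter Expression
{job="my-app", level="error"} |= "timeout"

# Parser Expression + Label Filter + Line Format Expression
{job="my-app"} | json | status >= 500 | line_format "{{.method}} {{.path}}"

# Log Range Aggregation with an Aggregation operator
sum by (job) (rate({job="my-app"}[5m]))

# Unwrap Expression over an extracted label
avg_over_time({job="my-app"} | json | unwrap duration [1m])
```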
🔥 Follow our examples to get started
qryn supports input via Push API using JSON or Protobuf, and is compatible with Promtail and any other LogQL-compatible agent. On top of that, qryn also accepts and converts log and metric inserts using Influx, Elastic, Tempo and other common API formats.
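As a minimal sketch, the JSON variant of the Loki-style push payload looks like the following; the labels, log line and endpoint host used here are purely illustrative:

```javascript
// Build a Loki-compatible JSON push payload for the /loki/api/v1/push endpoint.
const payload = {
  streams: [
    {
      // labels, grouped the same way as in Prometheus
      stream: { job: "my-app", level: "info" },
      // each value is [unix timestamp in nanoseconds (string), log line]
      values: [
        [Date.now().toString() + "000000", "hello from my-app"]
      ]
    }
  ]
};

// POST it with any HTTP client, e.g. fetch in Node 18+:
// await fetch("http://qryn.example:3100/loki/api/v1/push", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(payload)
// });
console.log(JSON.stringify(payload.streams[0].stream));
```

Agents such as Promtail produce the same structure (or its Protobuf equivalent) for you.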
Our preferred companion for parsing and shipping log streams to qryn is paStash, with extensive interpolation capabilities to create tags and trim any log fat. Sending JSON-formatted logs is suggested when dealing with metrics.
qryn implements custom query functions for ClickHouse timeseries extraction, allowing direct access to any existing table
Convert columns to tagged timeseries using the emulated LogQL 2.0 query format
<aggr-op> by (<labels,>) (<function>(<metric>[range_in_seconds])) from <database>.<table> where <optional condition>
avg by (source_ip) (rate(mos[60])) from my_database.my_table
sum by (ruri_user, from_user) (rate(duration[300])) from my_database.my_table where duration > 10
Convert columns to tagged timeseries using the experimental clickhouse function
clickhouse({ db="my_database", table="my_table", tag="source_ip", metric="avg(mos)", where="mos > 0", interval="60" })
| parameter | description |
|---|---|
| db | ClickHouse database name |
| table | ClickHouse table name |
| tag | column(s) for tags, comma separated |
| metric | function for metric values |
| where | where condition (optional) |
| interval | interval in seconds (optional) |
| timefield | time/date field name (optional) |
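For instance, a query combining all of the parameters above might look like this (the database, table and column names are hypothetical, reusing those from the earlier examples):

```logql
clickhouse({ db="my_database", table="my_table", tag="ruri_user", metric="max(duration)", where="duration > 10", interval="300", timefield="record_datetime" })
```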
Check out the Wiki for detailed instructions or choose a quick method:
Clone this repository, install with npm and run using Node.js 14.x (or higher)
npm install
CLICKHOUSE_SERVER="my.clickhouse.server" CLICKHOUSE_AUTH="default:password" CLICKHOUSE_DB="qryn" node qryn.js
Install qryn as a global package on your system using npm
sudo npm install -g qryn
cd $(dirname $(readlink -f `which qryn`)) \
&& CLICKHOUSE_SERVER="my.clickhouse.server" CLICKHOUSE_AUTH="default:password" CLICKHOUSE_DB="qryn" qryn
sudo npm install -g qryn pm2
cd $(dirname $(readlink -f `which qryn`)) \
&& CLICKHOUSE_SERVER="my.clickhouse.server" CLICKHOUSE_AUTH="default:password" CLICKHOUSE_DB="qryn" pm2 start qryn
pm2 save
pm2 startup
For a fully working demo, check the docker-compose example
The project uses pino for logging and outputs JSON log lines by default. If you want to see "pretty" log lines, start qryn with npm run pretty
The following ENV Variables can be used to control qryn parameters and backend settings.
| ENV | Default | Usage |
|---|---|---|
| CLICKHOUSE_SERVER | localhost | ClickHouse Server address |
| CLICKHOUSE_PORT | 8123 | ClickHouse Server port |
| CLICKHOUSE_DB | qryn | ClickHouse Database Name |
| CLICKHOUSE_AUTH | default: | ClickHouse Authentication (user:password) |
| CLICKHOUSE_PROTO | http | ClickHouse Protocol (http, https) |
| CLICKHOUSE_TIMEFIELD | record_datetime | ClickHouse DateTime column for native queries |
| BULK_MAXAGE | 2000 | Max Age for Bulk Inserts |
| BULK_MAXSIZE | 5000 | Max Size for Bulk Inserts |
| BULK_MAXCACHE | 50000 | Max Labels in Memory Cache |
| LABELS_DAYS | 7 | Max Days before Label rotation |
| SAMPLES_DAYS | 7 | Max Days before Timeseries rotation |
| HOST | 0.0.0.0 | HTTP API IP |
| PORT | 3100 | HTTP API PORT |
| QRYN_LOGIN | undefined | Basic HTTP Username |
| QRYN_PASSWORD | undefined | Basic HTTP Password |
| READONLY | false | Readonly Mode, no DB Init |
| FASTIFY_BODYLIMIT | 5242880 | API Maximum payload size in bytes |
| FASTIFY_REQUESTTIMEOUT | 0 | API Maximum Request Timeout in ms |
| FASTIFY_MAXREQUESTS | 0 | API Maximum Requests per socket |
| FASTIFY_METRICS | false | API /metrics exporter |
| TEMPO_SPAN | 24 | Default span for Tempo queries in hours |
| TEMPO_TAGTRACE | false | Optional tagging of TraceID (expensive) |
| DEBUG | false | Debug Mode (for backwards compatibility) |
| LOG_LEVEL | info | Log Level |
| HASH | short-hash | Hash function used for fingerprints. Currently supported: short-hash and xxhash64 |
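As a sketch, several of the settings above can be combined before launch; the server address, credentials and login shown here are placeholders:

```shell
# Illustrative environment for a qryn instance (values are placeholders)
export CLICKHOUSE_SERVER="my.clickhouse.server"
export CLICKHOUSE_PORT=8123
export CLICKHOUSE_AUTH="default:password"
export CLICKHOUSE_DB="qryn"
export QRYN_LOGIN="admin"      # setting both enables Basic HTTP auth
export QRYN_PASSWORD="secret"
export LOG_LEVEL="debug"
# then start the server:  node qryn.js
```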
©️ QXIP BV, released under the GNU Affero General Public License v3.0. See LICENSE for details.