kafka
Kafka library and utilities for Crystal.
- x86_64 binary: https://github.com/maiha/kafka.cr/releases
- kafka: 1.0
- crystal: 0.26.1
require "kafka"
kafka = Kafka.new
kafka.topics.map(&.name) # => ["t1", ...]
kafka.produce "t1", "foo"
kafka.fetch "t1" # => Kafka::Message("t1#0:0", "foo")
kafka.close
- bin: standalone kafka utility applications (x86 static binary)
- lib: use as a Crystal library
Add it to `shard.yml`:

```yaml
dependencies:
  kafka:
    github: maiha/kafka.cr
    version: 0.7.0
```
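Then install it with the standard Shards command:

```
% shards install
```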
require "kafka"
kafka = Kafka.new("localhost", 9092)
kafka.topics.map(&.name) # => ["t1", ...]
kafka.produce("t1", "test")
kafka.fetch("t1", 0, 0_i64) # => Kafka::Message("t1[0]#0", "test")
kafka.close
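A minimal sketch using only the calls shown above: it assumes a broker on localhost:9092 and an existing topic "t1", and uses `ensure` only to guarantee the connection is closed even if a call raises.

```crystal
require "kafka"

kafka = Kafka.new("localhost", 9092)
begin
  # produce a few messages, then read the first one back
  3.times do |i|
    kafka.produce("t1", "msg-#{i}")
  end
  msg = kafka.fetch("t1", 0, 0_i64) # partition 0, offset 0
  puts msg
ensure
  kafka.close # always release the socket
end
```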
- Type `make compile` to generate `bin/kafka-*`.
- Type `make release` if you want static, optimized binaries.
```
% make compile
% make release
```
- kafka-broker : Show broker information. "-j" prints JSON output.
- kafka-cluster-watch : Report cluster information continually.
- kafka-error : Lookup kafka error code.
- kafka-fetch : Fetch logs from kafka. "-g" tries to resolve the payload.
- kafka-info : Show topic information about offsets. (needs only a broker)
- kafka-ping : Ping a broker, like unix ping.
- kafka-topics : Show topic information about leader, replicas, isrs. (needs exact leaders)
- kafka-heartbeat : Send heartbeat request(api:12). [experimental]
- kafka-metadata : Send metadata request(api:3).
- kafka-offset : Send offset request(api:2).
```
% ./bin/kafka-info t1 t2
t2#0 count=18 [37, 36, 19]
t1#2 count=1 [1, 0]
t1#0 count=1 [1, 0]
t1#1 count=0 [0]
```
- Count messages in all topics:

```
% ./bin/kafka-info -c -a
2 a
0 b
```
`bin/kafka-topics` shows topic names and metadata.
```
% ./bin/kafka-topics
t1
tmp
```
```
% ./bin/kafka-topics -c | sort -n
0 t1
6 tmp
```
```
% ./bin/kafka-topics t1 t2
t1(0 => {leader=1,replica=[1],isr=[1]})
ERROR: t2(UnknownTopicOrPartitionCode (3))
```
`bin/kafka-ping` works like the unix `ping` command.
```
% ./bin/kafka-ping localhost
Kafka PING localhost:9092 (by HeartbeatRequest)
[2016-01-28 00:27:30 +0000] errno=16 from localhost:9092 req_seq=1 time=7.354 ms
[2016-01-28 00:27:31 +0000] errno=16 from localhost:9092 req_seq=2 time=3.433 ms
^C
--- localhost:9092 kafka ping statistics ---
2 requests transmitted, 2 received, ok: 2, error: 0
```
The `-g` option can be used to check the broker version.
```
% ./bin/kafka-ping localhost -g
Kafka PING localhost:9092 (by HeartbeatRequest)
[2016-01-28 00:29:16 +0000] (0.8.x) from localhost:9092 req_seq=1 time=8.459 ms
...
```
- Writes reports about state changes to stderr:
```
% ./bin/kafka-ping localhost -g

(stdout)
[2016-01-28 00:30:32 +0000] (0.8.x) from localhost:9092 req_seq=76 time=3.194 ms
[2016-01-28 00:30:33 +0000] (0.8.x) from localhost:9092 req_seq=77 time=3.122 ms
[2016-01-28 00:30:34 +0000] (broker is down) from localhost:9092 req_seq= time=0.511 ms

(stderr)
[2016-01-28 00:30:34 +0000] localhost:9092 : (0.8.x) -> (broker is down)
```
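Because per-request lines go to stdout and state-change reports go to stderr, a plain shell redirection is enough to keep a change log while watching live output. A usage sketch; only the `-g` flag shown above is used, and the log file name is arbitrary:

```
% ./bin/kafka-ping localhost -g 2>> broker-state.log
```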
`make compile`

Run `docker-compose run spec`, or simply `make spec`.
With `make spec`, the Docker containers (`zk` and the kafka brokers) are created automatically by docker-compose.
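Both commands assume Docker and docker-compose are available locally:

```
% docker-compose run spec
% make spec
```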
- MIT : This repository
- Apache 2.0 : `src/utils/zig_zag.cr` derives its varint algorithm from https://github.com/apache/kafka/blob/1.0/clients/src/main/java/org/apache/kafka/common/utils/ByteUtils.java
- Fork it ( https://github.com/maiha/kafka.cr/fork )
- Create your feature branch (git checkout -b my-new-feature)
- Commit your changes (git commit -am 'Add some feature')
- Push to the branch (git push origin my-new-feature)
- Create a new Pull Request
- maiha - creator, maintainer