Zipkin is a distributed tracing system. It helps gather timing data needed to troubleshoot latency problems in microservice architectures. It manages both the collection and lookup of this data. Zipkin’s design is based on the Google Dapper paper.
This project includes a dependency-free library and a Spring Boot server. Storage options include in-memory, JDBC (MySQL), Cassandra, and Elasticsearch.
The quickest way to get started is to fetch the latest released server as a self-contained executable jar. Note that the Zipkin server requires JRE 8 or later. For example:
wget -O zipkin.jar 'https://search.maven.org/remote_content?g=io.zipkin.java&a=zipkin-server&v=LATEST&c=exec'
java -jar zipkin.jar
You can also start Zipkin via Docker.
docker run -d -p 9411:9411 openzipkin/zipkin
Once you've started, browse to http://your_host:9411 to find traces!
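If you prefer to check from the command line first, you can also poll the server's health endpoint; this is a sketch that assumes a local install and the /health path exposed by the Spring Boot based server:

curl -s http://localhost:9411/health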
Check out the zipkin-server documentation for configuration details, or docker-zipkin for how to use docker-compose.
The core library is used by both Zipkin instrumentation and the Zipkin server. Its minimum Java language level is 6, in an effort to support those writing agent instrumentation.
It includes built-in codecs for both Thrift and JSON structs. A direct dependency on gson (a JSON library) is avoided by minifying and repackaging the classes used. The result is a 155k jar which won't conflict with any library you use.
Ex.
// your instrumentation makes a span
archiver = BinaryAnnotation.create(LOCAL_COMPONENT, "archiver", Endpoint.create("service", 127 << 24 | 1));
span = Span.builder()
.traceId(1L)
.name("targz")
.id(1L)
.timestamp(epochMicros())
.duration(durationInMicros)
.addBinaryAnnotation(archiver).build();
// Now, you can encode it as json or thrift
bytes = Codec.JSON.writeSpan(span);
bytes = Codec.THRIFT.writeSpan(span);
Zipkin includes a StorageComponent, used to store and query spans and dependency links. This is used by the server and those making custom servers, collectors, or span reporters. For this reason, storage components have minimal dependencies; many run on Java 7.
Ex.
// this won't create network connections
storage = CassandraStorage.builder()
.contactPoints("my-cassandra-host").build();
// but this will
trace = storage.spanStore().getTrace(traceId);
// clean up any sessions, etc
storage.close();
The InMemoryStorage component is packaged in zipkin's core library. It is not persistent, nor viable for realistic workloads. Its purpose is testing, for example, starting a server on your laptop without any database needed.
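For example, here is a sketch of explicitly selecting in-memory storage when starting the server, assuming the STORAGE_TYPE environment variable described in the zipkin-server documentation (in-memory is also the default):

# run a throwaway server backed by in-memory storage
STORAGE_TYPE=mem java -jar zipkin.jar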
The MySQLStorage component is currently only tested against MySQL 5.6-7. It is designed to be easy to understand and get started with. For example, it deconstructs spans into columns, so you can perform ad-hoc queries using SQL. However, this component has known performance issues: queries will eventually take seconds to return if you put a lot of data into it.
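As a sketch, here is how you could point the server at an existing MySQL instance, assuming the MySQL environment variables described in the zipkin-server documentation; the host and credentials are placeholders:

# placeholder host and credentials; the zipkin schema must already exist
STORAGE_TYPE=mysql MYSQL_HOST=my-mysql-host MYSQL_USER=zipkin MYSQL_PASS=zipkin java -jar zipkin.jar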
The CassandraStorage component is tested against Cassandra 2.2+. It stores spans as opaque Thrift blobs, which means you can't read them in cqlsh. However, it is designed for scale: for example, it has manually implemented indexes to make querying larger data sets more performant. This store requires a Spark job to aggregate dependency links.
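Similarly, a sketch of running the server against a Cassandra cluster, assuming the CASSANDRA_CONTACT_POINTS variable described in the zipkin-server documentation; the contact point is a placeholder:

# contact point is a placeholder for your cluster
STORAGE_TYPE=cassandra CASSANDRA_CONTACT_POINTS=my-cassandra-host java -jar zipkin.jar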
The ElasticsearchStorage component is tested against Elasticsearch 2.3. It stores spans as JSON and is designed for larger scale. This store also requires a Spark job to aggregate dependency links.
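And a sketch of running the server against Elasticsearch, assuming the ES_HOSTS variable described in the zipkin-server documentation; the URL is a placeholder:

# host list is a placeholder for your cluster
STORAGE_TYPE=elasticsearch ES_HOSTS=http://my-es-host:9200 java -jar zipkin.jar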
The Zipkin server receives spans via HTTP POST and responds to queries from its UI. It can also run collectors, such as Scribe or Kafka.
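For example, here is a sketch of posting a JSON list of spans to a locally running server; spans.json is a placeholder file, and the path assumes the v1 collector endpoint:

curl -X POST -H 'Content-Type: application/json' --data @spans.json http://localhost:9411/api/v1/spans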
To run the server from the currently checked out source, enter the following.
# Build the server and also make its dependencies
$ ./mvnw -DskipTests --also-make -pl zipkin-server clean install
# Run the server
$ java -jar ./zipkin-server/target/zipkin-server-*exec.jar
Releases are uploaded to Bintray.
Snapshots are uploaded to JFrog after commits to master.
Released versions of zipkin-server are published to Docker Hub as openzipkin/zipkin.
See docker-zipkin for details.
http://zipkin.io/zipkin contains versioned folders with JavaDocs published on each (non-PR) build, as well as on releases.