The kamon-http4s module brings traces and metrics to your http4s-based applications.
It is currently available for Scala 2.11 and 2.12.
Supported releases and dependencies are shown below.
| kamon-http4s | status | jdk  | scala      | http4s |
|--------------|--------|------|------------|--------|
| 1.0.8-1.0.10 | stable | 1.8+ | 2.11, 2.12 | 0.18.x |
| 1.0.13       | stable | 1.8+ | 2.11, 2.12 | 0.20.x |
To get started with SBT, simply add the following to your build.sbt
file:
libraryDependencies += "io.kamon" %% "kamon-http4s" % "1.0.13"
With the dependency in place, wrap your http4s client and service with the Kamon middleware:

def serve[F[_]](implicit Effect: Effect[F], EC: ExecutionContext): Stream[F, StreamApp.ExitCode] =
  for {
    _        <- Stream.eval(Sync[F].delay(println("Starting Google Service with Client")))
    client   <- Http1Client.stream[F]()
    service  =  GoogleService.service[F](middleware.client.KamonSupport(client)) (1)
    exitCode <- BlazeBuilder[F]
                  .bindHttp(Config.server.port, Config.server.interface)
                  .mountService(middleware.server.KamonSupport(service)) (2)
                  .serve
  } yield exitCode
- (1): The Kamon middleware for the client side.
- (2): The Kamon middleware for the server side.
The GoogleService used above simply calls Google through the instrumented client:

object GoogleService {
  def service[F[_]: Effect](c: Client[F]): HttpService[F] = {
    val dsl = new Http4sDsl[F] {}
    import dsl._

    HttpService[F] {
      case GET -> Root / "call-google" =>
        Ok(c.expect[String]("https://www.google.com.ar"))
    }
  }
}
The full set of dependencies used in this example, including the reporters, looks like this:

libraryDependencies ++= Seq(
  "io.kamon" %% "kamon-core" % "1.1.2",
  "io.kamon" %% "kamon-system-metrics" % "1.0.1",
  "io.kamon" %% "kamon-prometheus" % "1.0.0",
  "io.kamon" %% "kamon-http4s" % "1.0.7",
  "io.kamon" %% "kamon-zipkin" % "1.0.1",
  "io.kamon" %% "kamon-jaeger" % "1.0.2"
)
The last step in the process: start reporting your data! You can register as many reporters as you want by using the Kamon.addReporter(...) function:
Kamon.addReporter(new PrometheusReporter())
Kamon.addReporter(new ZipkinReporter())
Kamon.addReporter(new Jaeger())
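For reference, a minimal entry point wiring the reporters and the serve method together might look like the sketch below (the object name is illustrative, and it assumes the serve method shown earlier is in scope):

import cats.effect.IO
import fs2.{Stream, StreamApp}
import kamon.Kamon
import kamon.prometheus.PrometheusReporter
import kamon.zipkin.ZipkinReporter

import scala.concurrent.ExecutionContext.Implicits.global

object Main extends StreamApp[IO] {
  override def stream(args: List[String], requestShutdown: IO[Unit]): Stream[IO, StreamApp.ExitCode] = {
    // Register the reporters before the server starts handling traffic
    Kamon.addReporter(new PrometheusReporter())
    Kamon.addReporter(new ZipkinReporter())

    // serve is the method defined earlier in this guide
    serve[IO]
  }
}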
Now you can simply run the application with sbt run and after a few seconds you will get the Prometheus metrics exposed on http://localhost:9095/ and message traces sent to Zipkin! The default configuration publishes the Prometheus endpoint on port 9095 and assumes that you have a Zipkin instance running locally on port 9411, but you can change these values under the kamon.prometheus and kamon.zipkin configuration keys, respectively.
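For example, overriding those defaults in your application.conf could look like the following sketch (the key names are based on the module defaults described above; double-check them against each module's reference.conf):

------------------------------------------------------------------------------
kamon {
  prometheus.embedded-server {
    hostname = "0.0.0.0"
    port     = 9095
  }

  zipkin {
    host = "localhost"
    port = 9411
  }
}
------------------------------------------------------------------------------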
All you need to do is add a scrape configuration in Prometheus. The following snippet is a minimal example that should work with the minimal server from the previous section.
A minimal Prometheus configuration snippet
------------------------------------------------------------------------------
scrape_configs:
- job_name: 'kamon-prometheus'
static_configs:
- targets: ['localhost:9095']
------------------------------------------------------------------------------
Once you configure this target in Prometheus, you are ready to run some queries, for example:
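A 95th percentile latency per operation could be computed with the query below. This assumes Kamon's default span metric is exported as span_processing_time_seconds; verify the exact metric names on the scrape endpoint before using it.

------------------------------------------------------------------------------
histogram_quantile(0.95, sum(rate(span_processing_time_seconds_bucket[5m])) by (le, operation))
------------------------------------------------------------------------------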
These are the server metrics gathered by default:
- active-requests: The number of active requests.
- http-responses: Response time by status code.
- abnormal-termination: The number of abnormally terminated requests.
- service-errors: The number of service errors.
- headers-times: Time spent processing request headers.
- http-request: Request time by status code.
Now you can go ahead, create your own custom metrics and build your own dashboards!
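As a starting point, a custom metric is only a couple of lines with the Kamon 1.x API; the metric names below are made up for illustration:

import kamon.Kamon

// Count an application-level event
Kamon.counter("orders.created").increment()

// Record a value in a histogram, e.g. a payload size in bytes
Kamon.histogram("orders.payload-size").record(512)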
Assuming that you have a Zipkin instance running locally with the default ports, you can go to http://localhost:9411 and start investigating traces for this application. Once you find a trace you are interested in, open it to inspect its spans.
Clicking on a span will bring up a details view where you can see all tags for the selected span.
That's it, you are now collecting metrics and tracing information from an http4s application.
See this example by @cmcmteixeira of how to correctly configure the execution context.