com-lihaoyi/cask

Need observability hook for "failed" routes.

jsuereth opened this issue · 1 comment

I've been trying to set up distributed tracing for Cask, and found that it lacks some hooks I need to set things up.

A few requests (or help in docs):

  • Decorators that are able to know the HTTP status code of the result
  • A mechanism to register a decorator for all failed routes (for metrics + traces)
  • Some (efficient) way to annotate the size of request/response bytes

If you're curious what we have, here's a simple library I threw together to give Cask the "best possible" OpenTelemetry HTTP experience: https://github.com/GoogleCloudPlatform/scala-o11y-cui-showcase/tree/main/utils/src/main/scala/com/google/example/o11y/cask

The meat of the implementation is all within the @traced decorator.

TBH this is something that you will probably have to dig through the code and propose to us, rather than the other way around. Cask's data model and code structure were not designed with distributed tracing in mind, so it likely does not include all the hooks you need to support it out of the box, though they could probably be added.

Decorators that are able to know the HTTP status code of the result

I suspect decorators can already know that, if the inner decorator provides one, though I'm not 100% sure off the top of my head. If not, it should be addable.
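For reference, a decorator can observe the inner result by invoking the delegate itself and inspecting what comes back. A minimal sketch, assuming the route produces a plain cask.Response and using println as a stand-in for a real metrics/trace recorder:

```scala
class observed extends cask.RawDecorator {
  def wrapFunction(ctx: cask.Request, delegate: Delegate) = {
    val result = delegate(Map())
    result match {
      // Successful invocation: the response, including its status code, is visible here
      case cask.router.Result.Success(response) =>
        println(s"${ctx.exchange.getRequestPath} -> ${response.statusCode}")
      // Anything else represents a routing or handler failure
      case error =>
        println(s"${ctx.exchange.getRequestPath} failed: $error")
    }
    result
  }
}
```

Whether `statusCode` is populated depends on what the inner decorator actually returns, which is the maintainer's caveat above.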

A mechanism to register a decorator for all failed routes (for metrics + traces)

We have hooks to register what to do in case of failure, though I'm not sure whether that's sufficiently flexible for what you need to do:

def handleNotFound() = Main.defaultHandleNotFound()
def handleMethodNotAllowed() = Main.defaultHandleMethodNotAllowed()
def handleEndpointError(routes: Routes,
                        metadata: EndpointMetadata[_],
                        e: cask.router.Result.Error) = {
  Main.defaultHandleError(routes, metadata, e, debugMode)
}
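A sketch of how those hooks could be used for failure metrics, assuming they can be overridden in a cask.MainRoutes subclass (the counter here is a hypothetical stand-in for a real metrics backend, and exact signatures may vary by Cask version):

```scala
object Server extends cask.MainRoutes {
  // Hypothetical in-memory counter; any metrics/tracing backend would do
  val failures = new java.util.concurrent.atomic.AtomicLong

  override def handleEndpointError(routes: cask.main.Routes,
                                   metadata: cask.router.EndpointMetadata[_],
                                   e: cask.router.Result.Error) = {
    failures.incrementAndGet() // record the failure before delegating
    super.handleEndpointError(routes, metadata, e)
  }

  @cask.get("/")
  def index() = "ok"

  initialize()
}
```

This only covers endpoint errors, not routes that return error status codes successfully, which is why the decorator-level hook above may still be needed.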

Some (efficient) way to annotate the size of request/response bytes

Currently, Cask only creates responses, not requests. This is up to the individual decorators: e.g. by default we stream JSON responses in the getJSON and postJSON decorators, so we don't actually have a good count of how big they are up front. This could be added, e.g. by materializing responses in memory up to a configurable maximum size, allowing us to provide up-front sizes for small and medium responses while still streaming large ones.
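One possible building block for the byte-count side, independent of where Cask would hook it in: wrap the stream the response body is written to and count bytes as they pass through, avoiding any buffering. The class and names here are hypothetical, not part of Cask:

```scala
import java.io.OutputStream

// Hypothetical helper: counts bytes flowing to an underlying stream, so a
// decorator could record response sizes after the body has been streamed.
class CountingOutputStream(underlying: OutputStream) extends OutputStream {
  private var count = 0L
  def bytesWritten: Long = count
  def write(b: Int): Unit = { underlying.write(b); count += 1 }
  override def write(b: Array[Byte], off: Int, len: Int): Unit = {
    underlying.write(b, off, len)
    count += len
  }
}
```

The trade-off versus materializing responses up to a size cap is that the count is only known after streaming finishes, so it works for metrics but not for a Content-Length-style up-front annotation.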