ExplainMicroserviceCost Batch Inference Per Query
17zhangw opened this issue · 1 comment
17zhangw commented
ExplainMicroserviceCost processes the explain "tree" for a given query and estimates the cost of the query using our models via the microservice. We currently make a network request for each node in the explain tree.
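A minimal sketch of how the explain tree could be flattened into a single batch payload. The helper name `flatten_explain_tree` and the `"Plans"` child key (borrowed from PostgreSQL's `EXPLAIN (FORMAT JSON)` output) are assumptions for illustration, not the project's actual code:

```python
# Hypothetical sketch: flatten a PostgreSQL-style JSON explain plan into a flat
# list of nodes so one microservice request can cover the entire query.
from typing import Any, Dict, List

def flatten_explain_tree(plan: Dict[str, Any]) -> List[Dict[str, Any]]:
    """Depth-first flatten of an explain plan node and all of its children."""
    nodes = [plan]
    for child in plan.get("Plans", []):
        nodes.extend(flatten_explain_tree(child))
    return nodes
```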
The overall goals of this task to restructure ExplainMicroserviceCost are as follows (a rough sketch follows the list):
- Insert some tooling/instrumentation to extract microservice inference times (can also be logged to sqlite)
- Make 1 network call to perform model inference for an entire query explain tree
- Allow for partial query explain tree cost inference (i.e., failing 1 node does not fail the entire tree)
- Batch-log the inference + real query results
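A rough sketch of what the restructured call might look like, combining one network request per query, per-node failure tolerance, inference-time instrumentation, and batched logging to sqlite. The endpoint path (`/batch_infer`), table name (`inference_log`), and response shape are illustrative assumptions, not the microservice's actual API:

```python
# Hypothetical sketch only: batch the whole explain tree into one microservice
# request, tolerate per-node failures, and log inference timings to sqlite.
import json
import sqlite3
import time
from typing import Any, Dict, List, Optional

import requests  # assumed HTTP client; the project may use something else

MICROSERVICE_URL = "http://localhost:5000/batch_infer"  # assumed endpoint

def batch_infer(nodes: List[Dict[str, Any]],
                db_path: str = "inference.db") -> List[Optional[float]]:
    """Send every explain node in one request; return per-node cost estimates.

    A node whose inference fails yields None instead of failing the whole tree.
    """
    start = time.perf_counter()
    resp = requests.post(MICROSERVICE_URL, json={"nodes": nodes}, timeout=30)
    resp.raise_for_status()
    elapsed_ms = (time.perf_counter() - start) * 1000.0

    # Assumed response shape: one entry per node, either {"cost": ...} or {"error": ...}.
    results = resp.json()["results"]
    costs = [r.get("cost") for r in results]  # None where a node failed

    # Batch-log the inference timing and results in a single transaction.
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS inference_log "
            "(node_type TEXT, cost REAL, elapsed_ms REAL, payload TEXT)"
        )
        conn.executemany(
            "INSERT INTO inference_log VALUES (?, ?, ?, ?)",
            [
                (n.get("Node Type"), c, elapsed_ms, json.dumps(n))
                for n, c in zip(nodes, costs)
            ],
        )
    return costs
```

The caller would pass `flatten_explain_tree(plan)` and then re-attach the returned costs to the tree; nodes with a `None` cost could fall back to the default optimizer estimate rather than aborting the whole query.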
lmwnshn commented
Forgot that we need to manually close things if we don't tag it as Fix...