SiriDB/siridb-server

HTTP Pipelining issue in embedded api service

srdgame opened this issue · 17 comments

When I was testing a query from a Firefox extension called RESTer, the server crashed in the following code:

src/siri/api.c

static int api__headers_complete_cb(http_parser * parser)
{
    siri_api_request_t * ar = parser->data;

    assert (!ar->buf);
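    /* This assert fails for a pipelined request: ar->buf still holds the
     * buffer of the previous, not yet completed request. */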

The reason is that the Firefox extension sends multiple HTTP requests over the same socket connection. Thus headers_complete_cb for the next request is called while the server is still handling the request from message_complete_cb, so ar->buf is not yet NULL and the assert fails.
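
For anyone who wants to reproduce this without RESTer, a minimal sketch that pipelines two requests over a single socket could look like the following (the address, port and request path are assumptions, adjust them to your setup):

/* pipeline_repro.c - rough reproducer: send two HTTP requests over one TCP
 * connection without waiting for the first response (HTTP/1.1 pipelining).
 * Address, port and path are assumptions; adjust them to your setup. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9020);                 /* assumed HTTP API port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (fd < 0 || connect(fd, (struct sockaddr *) &addr, sizeof(addr)) != 0)
    {
        perror("connect");
        return 1;
    }

    const char * req = "GET / HTTP/1.1\r\nHost: localhost\r\n\r\n";

    /* Write both requests back-to-back; the second one reaches the server
     * while it may still be busy with the first, which is the pipelining
     * case that triggered the assert. */
    if (write(fd, req, strlen(req)) < 0 || write(fd, req, strlen(req)) < 0)
        perror("write");

    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf));      /* read whatever comes back */
    if (n > 0)
        fwrite(buf, 1, (size_t) n, stdout);

    close(fd);
    return 0;
}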

@srdgame, I've created a pull request which should solve this issue. Can you verify whether #165 solves it?

@joente Will test it ASAP and provide feedback.

I have tested with your fixes; siridb-server no longer crashes in that test case.

Digging into query_cb, it looks OK to free ar->buf there, since ar->query is copied inside siridb_query_run. But I am not sure about other cases, e.g. an insert POST request from a client?

@joente I hit the same issue when using the Grafana siridb-http plugin on port 9020. After applying your patch, it seems the query string is not parsed correctly (at least judging by the "Parsing query...." lines in the siridb-server log) when typing a series name in the Explore page of Grafana. (You have to type the series name quickly enough.)

And occasionally you will hit the following issue, even with a correct series name:

(screenshots of the error)

With the following changes, I no longer hit the 'NOT FOUND' issue:

(screenshot of the code change)

You're correct, I was a bit too quick with this solution. I'm thinking of keeping a flag that controls when the buffer requires a reset.
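
A rough sketch of what such a flag could look like; demo_request_t and its fields are hypothetical stand-ins for siri_api_request_t, not the code in the pull request:

/* Sketch only; the field names (buf, len, in_use) are assumptions. */
#include <http_parser.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct
{
    char * buf;     /* buffer of the request currently being handled */
    size_t len;
    int in_use;     /* set while the previous request is still being processed */
} demo_request_t;

static int demo_headers_complete_cb(http_parser * parser)
{
    demo_request_t * ar = parser->data;

    if (ar->buf != NULL)
    {
        if (ar->in_use)
        {
            /* previous request not finished yet: report a callback error so
             * http_parser stops instead of the server hitting the assert */
            return -1;
        }

        /* previous request is finished, so the buffer only needs a reset */
        free(ar->buf);
        ar->buf = NULL;
        ar->len = 0;
    }
    return 0;
}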

@srdgame, I've updated the pull request.

@joente Will test it now

@joente I have tested with my cases; it works.

I am still worried about a potential issue, as we have the following errors in the log:

(screenshot of the log errors)

Could it be better to have an atomic lock to wait until the previous query has completed?

HTTP/1.1 (RFC 2616), 8.1.2.2 Pipelining:

A client that supports persistent connections MAY "pipeline" its requests (i.e., send multiple requests without waiting for each response). A server MUST send its responses to those requests in the same order that the requests were received.

This might help: LINK

Could it be better to have an atomic lock to wait until the previous query has completed?

A lock would not work since this would block the main thread. I think some change is required to keep track of the order in which requests are received, since responses must be written back in the same order.
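
As an illustration of that idea, a per-connection FIFO could record arrival order and only flush responses once all earlier ones have been written; all names below are hypothetical, none come from the SiriDB code base:

/* Hypothetical per-connection FIFO. Each request is queued on arrival and its
 * response may only be flushed once every earlier response has been written,
 * as RFC 2616, 8.1.2.2 requires. */
#include <stddef.h>
#include <stdlib.h>

typedef struct pending_s
{
    char * response;            /* NULL until the query has finished */
    size_t response_len;
    struct pending_s * next;
} pending_t;

typedef struct
{
    pending_t * head;           /* oldest request without a written response */
    pending_t * tail;
} conn_queue_t;

/* Register a request in arrival order; called when a request is received. */
static pending_t * queue_push(conn_queue_t * q)
{
    pending_t * p = calloc(1, sizeof(pending_t));
    if (p == NULL)
        return NULL;
    if (q->tail != NULL)
        q->tail->next = p;
    else
        q->head = p;
    q->tail = p;
    return p;
}

/* Attach a finished response, then flush everything that is now in order. */
static void queue_ready(conn_queue_t * q, pending_t * p,
                        char * response, size_t len,
                        void (*write_response)(const char *, size_t))
{
    p->response = response;
    p->response_len = len;

    while (q->head != NULL && q->head->response != NULL)
    {
        pending_t * done = q->head;
        write_response(done->response, done->response_len);
        q->head = done->next;
        if (q->head == NULL)
            q->tail = NULL;
        free(done->response);
        free(done);
    }
}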

@joente I searched for a solution; it talks about pausing the parser. I have tried to make the code changes in my repo:

kooiot@19fa5c1

Currently, I do not know whether the uv_read_stop is a must or not. It seems that pausing the parser already fixes my cases.

@joente I searched for a solution; it talks about pausing the parser. I have tried to make the code changes in my repo:

kooiot@19fa5c1

👍 Nice, would it be possible to pause the parser in the message-complete callback and resume it in the write callback?
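
A sketch of how that pause/resume could be wired with http_parser and libuv, assuming a hypothetical request struct (the real siri_api_request_t and the #165 code may differ):

/* Sketch only: the request type, how the parser is stored, and the callback
 * wiring are all assumptions, not the code from the pull request. */
#include <http_parser.h>
#include <uv.h>

typedef struct
{
    http_parser parser;         /* per-connection parser */
    uv_stream_t * stream;       /* the client connection */
} demo_request_t;

static int demo_message_complete_cb(http_parser * parser)
{
    /* Pause here so a pipelined follow-up request that is already sitting in
     * the read buffer is not parsed while this request is still in flight. */
    http_parser_pause(parser, 1);

    /* ... dispatch the query / insert and build the response ... */
    return 0;
}

/* libuv write callback: the response has been written, so the connection may
 * continue with the next pipelined request. */
static void demo_write_cb(uv_write_t * req, int status)
{
    demo_request_t * ar = req->data;    /* assumes req->data was set on write */
    (void) status;

    /* Un-pause and feed the remaining bytes of the read buffer back into
     * http_parser_execute(); if uv_read_stop() was used as an extra guard,
     * uv_read_start() would be called here as well. */
    http_parser_pause(&ar->parser, 0);
}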

(I've updated the pull request: 854e6d2)

(small change since the flags are not required anymore: 3d9c2ec)

Will do more testing today.

@joente All my tests pass with the fixes.