Add support for queueing operations for postgres
The Postgres docs state:

"ReadyForQuery informs the frontend that it can safely send a new command. (It is not actually necessary for the frontend to wait for ReadyForQuery before issuing another command, but the frontend must then take responsibility for figuring out what happens if the earlier command fails and already-issued later commands succeed.)"
At the moment we need a wrapper like the module below to ensure we don't send a new request to the Postgres server before we've received a ReadyForQuery for the current operation. We should explore adding similar functionality to the core protocol library and enabling it by default. Without it, the current implementation is buggy if someone invokes multiple query operations on a single connection in parallel.
module Sequencer = struct
  (* Pair a value with a mutex so operations on it can be serialised. *)
  type 'a t = 'a * Lwt_mutex.t

  let create t = (t, Lwt_mutex.create ())

  (* Run [f] on the wrapped value while holding the mutex, so at most
     one enqueued operation is in flight at a time. *)
  let enqueue (t, mutex) f = Lwt_mutex.with_lock mutex (fun () -> f t)
end
(* [query] stands for any operation that sends a command on the
   connection; the original snippet reused the name [run] here, which
   shadowed the enclosing function. *)
let run () =
  let open Lwt.Syntax in
  let* conn = connect () in
  let sequencer = Sequencer.create conn in
  (* Both operations go through the sequencer, so they execute one
     after the other even though they are started concurrently. *)
  let+ () = Sequencer.enqueue sequencer query
  and+ () = Sequencer.enqueue sequencer query in
  Connection.close conn
Keeping the sequencing implementation in the IO backends seems okay for now. In Lwt we can use an Lwt_mutex-based approach like the one above, Async has Throttle, and on plain Unix we could use ordinary mutexes. Closing this for now, but this can be re-opened later if we come up with a better approach.
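For the blocking Unix backend, a minimal sketch of the same idea using only the stdlib Mutex might look like the following. The module name Blocking_sequencer and the string standing in for a connection value are illustrative, not part of the library:

```ocaml
(* Sketch for a plain Unix (blocking) backend: each enqueued operation
   holds the lock for the duration of the call, so only one command is
   in flight on the wrapped connection at a time. *)
module Blocking_sequencer = struct
  type 'a t = { value : 'a; mutex : Mutex.t }

  let create value = { value; mutex = Mutex.create () }

  (* Run [f] on the wrapped value under the mutex; [Fun.protect]
     guarantees the mutex is released even if [f] raises. *)
  let enqueue { value; mutex } f =
    Mutex.lock mutex;
    Fun.protect ~finally:(fun () -> Mutex.unlock mutex) (fun () -> f value)
end

let () =
  (* A string stands in for a real connection value here. *)
  let sequencer = Blocking_sequencer.create "conn" in
  let result = Blocking_sequencer.enqueue sequencer String.uppercase_ascii in
  assert (result = "CONN")
```

The shape mirrors the Lwt version above: the connection is only reachable through `enqueue`, so callers cannot accidentally bypass the sequencing.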