pacman82/odbc-api

Allow `set_num_rows` larger than initial capacity?

Closed this issue · 1 comment

When writing data in batches (say, batches coming from a channel or a `futures::Stream`), we may not know their exact size until we receive them.

This implies that when we initialize the buffers to bind to an INSERT statement, the capacity we declare upfront is either too small (it may fail for some large batch) or must be very large (causing high memory usage).

Given that the capacity is transmitted to the ODBC driver on every call (via `RowSetBuffer::row_array_size`), it seems that we could allow `set_num_rows` to be any number?
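
For reference, here is a minimal sketch of the constraint as I understand it, using the `BufferDesc`/`ColumnarAnyBuffer` names from a recent odbc-api release (type names and signatures may differ across versions):

```rust
use odbc_api::buffers::{BufferDesc, ColumnarAnyBuffer};

fn main() {
    // The capacity has to be declared before the first batch arrives.
    let capacity = 1_000;
    let descs = [BufferDesc::I64 { nullable: false }];
    let mut buffer = ColumnarAnyBuffer::from_descs(capacity, descs);

    // Fine: within the declared capacity.
    buffer.set_num_rows(500);

    // Rejected today: larger than the capacity the vectors were
    // allocated with, even though (per the point above) the driver is
    // told the size via `RowSetBuffer::row_array_size` on every call.
    // buffer.set_num_rows(2_000);
}
```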

The alternative is to re-allocate new buffers when a batch grows beyond the current capacity. I noticed that `set_num_rows` just drops the old vectors and allocates new ones, so maybe that is the solution here: re-create the buffers whenever a batch is too large?

Not needed, we can just re-initialize the buffers :)
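
For anyone landing here later, a minimal sketch of that re-initialization at the call site. The growth policy and the `make_descs` helper are my own illustration, not part of the odbc-api API; filling the columns and executing the prepared INSERT are elided:

```rust
use odbc_api::buffers::{BufferDesc, ColumnarAnyBuffer};

fn main() {
    // Hypothetical incoming batches of unknown, varying size.
    let batches: Vec<Vec<i64>> = vec![vec![1; 500], vec![2; 5_000], vec![3; 100]];

    // Column layout of the bound parameter buffer: one non-nullable i64 column.
    let make_descs = || [BufferDesc::I64 { nullable: false }];

    let mut capacity = 1_024; // initial guess
    let mut buffer = ColumnarAnyBuffer::from_descs(capacity, make_descs());

    for batch in &batches {
        if batch.len() > capacity {
            // A batch exceeds the declared capacity: drop the old buffer and
            // allocate a fresh one. Growing to the next power of two is one
            // possible policy to amortize repeated re-allocations.
            capacity = batch.len().next_power_of_two();
            buffer = ColumnarAnyBuffer::from_descs(capacity, make_descs());
        }
        buffer.set_num_rows(batch.len());
        // ... fill the columns (e.g. via `buffer.column_mut(0)`) and execute
        // the prepared INSERT statement with `buffer` bound as parameters ...
    }
}
```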