krojew/cdrs-tokio

Paging


I was wondering if there is any way of using paging as described here: https://docs.datastax.com/en/developer/java-driver/4.14/manual/core/paging/ ?
I am running a select query which returns many rows, about 60 MB combined. The performance is slow (~0.8 s). I think this is a paging issue.

let response = session
    .query_with_params(
        SELECT_CQL,
        StatementParamsBuilder::new()
            .with_values(query_values!(id, month))
            .build(),
    )
    .await?
    .response_body()?
    .into_rows();

Thank you!

When I try to use it like here:

let mut query_pager = pager.query_with_params(
    q,
    QueryParamsBuilder::new()
        .with_values(query_values!(1, 2))
        .build(),
);
I get "no method named 'query_with_params' found for struct 'SessionPager' in the current scope".
It looks like it was added after the release of 7.0.1?

That method has been available for 2 years: https://github.com/krojew/cdrs-tokio/blame/master/cdrs-tokio/src/cluster/pager.rs#L76 . You would need to show your whole code to see what's wrong.

Sorry, I missed that the regular query function is query_with_params while the one for the pager is query_with_param (without the s). Do you think it would be a good idea to make them the same?

I looked at the example here:

let q = "SELECT * FROM test_ks.another_test_table where a = ? and b = 1 and c = ?";
let mut pager = session.paged(3);
let mut query_pager = pager.query_with_param(
    q,
    QueryParamsBuilder::new()
        .values(query_values!(1, 2))
        .finalize(),
);
// Oddly enough, this returns false the first time...
assert!(!query_pager.has_more());
let rows = query_pager.next().await.expect("pager next");
assert_eq!(3, rows.len());
let rows = query_pager.next().await.expect("pager next");
assert_eq!(3, rows.len());
let rows = query_pager.next().await.expect("pager next");
assert_eq!(3, rows.len());
let rows = query_pager.next().await.expect("pager next");
assert_eq!(1, rows.len());
assert!(!query_pager.has_more());

and I wonder if there is a way to iterate through the rows in an async way?

I tried:

loop {
    futures.push(tokio::spawn(query_pager.next()));
    if !query_pager.has_more() {
        break;
    }
}

but that could not be done due to borrowing issues.

Of course - the pager is 100% async. The problem with your loop is that you spawn a task for each next call. That's not how async in Rust should be used - simply call query_pager.next().await in your loop.
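Something like this rough sketch (assuming you collect the rows into a results vector):

let mut results = Vec::new();

loop {
    // Await each page directly; the executor can run other tasks while this awaits.
    let rows = query_pager.next().await?;
    results.extend(rows);

    if !query_pager.has_more() {
        break;
    }
}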

But then it will await each iteration before continuing; it is not concurrent that way.

No - that's not how async works. Each await point gives the runtime a chance to switch to another task, so tasks proceed concurrently (at least in most runtimes). You can read more about async here: https://rust-lang.github.io/async-book/

I have this code:

loop {
    let now = Instant::now();
    for row in query_pager.next().await? {
        results.push(row);
    }

    println!("inside: {}", now.elapsed().as_millis());
    if !query_pager.has_more() {
        break;
    }
}

which doesn't seem to execute concurrently: the sum of the times of the individual iterations equals the total time of the whole loop.

It executes concurrently with other tasks run by the async executor. What you seem to want is a parallel pager, which is not possible, since the pager needs to store its current state.

The performance is very similar to the one without the pager. Shouldn't it be different?

That depends on the situation. If the performance is the same, that suggests it doesn't depend much on the amount of data, but on other factors. The pager will usually not give you better performance by itself - its main goal is to split the data when it gets large enough. You can use the pager in combination with async tasks to process rows concurrently as they come in, instead of processing a single data set at the end. That should give you a performance boost - see the sketch below.
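A rough sketch of that pattern (process_rows here is a hypothetical async function standing in for your per-row work; fetching stays sequential because the pager holds state, but each page is processed in its own Tokio task while the next page is being fetched):

use tokio::task::JoinSet;

let mut tasks = JoinSet::new();

loop {
    // Fetch the next page sequentially.
    let rows = query_pager.next().await?;

    // Process this page in a separate task so it overlaps with fetching the next page.
    tasks.spawn(async move {
        process_rows(rows).await;
    });

    if !query_pager.has_more() {
        break;
    }
}

// Wait for all processing tasks to finish.
while let Some(result) = tasks.join_next().await {
    result.expect("processing task panicked");
}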

I see. Thank you so much it was very helpful!

stale commented

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.