Should 'limit' not, um, limit the number of items or pages returned?
Closed this issue · 7 comments
Warning: GitHub issues are not an official support channel for Recurly. If you have an urgent request we suggest you contact us through an official channel: support@recurly.com or https://recurly.zendesk.com
Per the documentation:
All `client.*_list` methods return a `Pager`. On the pager, `pages()` returns a `PageIterator` and `items()` returns an `ItemIterator`.
From this, I understand that I can pass a limit to the list method like so:
```python
accounts = client.list_accounts(limit=200).items()
for account in accounts:
    print(account.code)
```
However, doing so does not limit the number of items returned in any way that I would consider conventional. For example, the above code snippet returns far more than 200 results from the Recurly account I'm working with.
I reached out to Recurly support, and they provided this somewhat confusing reply:
"The limit parameter impacts the number of results that are returned per API request. The Pager will continue to make API requests, in a manner that is transparent to you, until the entire record set is exhausted."
"For example, let's say we have 100 Subscriptions in total. If you specify a limit of 100, there will be a single API request that returns all 100 records. However, if you specify a limit of 50, then 2 API requests will be required to consume all of the records."
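To make support's explanation concrete, here is a toy model of that behavior. Everything below (`fake_api_request`, `iterate_all`, the record names) is hypothetical and not the real Recurly client; it only illustrates that `limit` controls the page size per request, while the pager keeps requesting pages until the record set is exhausted:

```python
# Pretend server-side data: 100 subscriptions in total.
RECORDS = [f"sub_{i}" for i in range(100)]

def fake_api_request(offset, limit):
    """Simulate one API call: a page of at most `limit` records,
    plus a has-more flag, like a cursor-based list endpoint."""
    page = RECORDS[offset:offset + limit]
    has_more = offset + limit < len(RECORDS)
    return page, has_more

def iterate_all(limit):
    """Consume every record, counting the API requests made along the way."""
    results, offset, requests = [], 0, 0
    while True:
        page, has_more = fake_api_request(offset, limit)
        requests += 1
        results.extend(page)
        if not has_more:
            break
        offset += limit
    return results, requests

print(len(iterate_all(limit=100)[0]), iterate_all(limit=100)[1])  # 100 1
print(len(iterate_all(limit=50)[0]), iterate_all(limit=50)[1])    # 100 2
```

Either way, all 100 records come back; only the number of round trips changes.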
I can't think of a use case for wanting two API requests for 50 records each vs. one request for 100, nor of another API where a limit works this way.
Can you clarify how you expect `limit` to work in this client library?
Phillip.
Hi @phillipadsmith, thank you for reaching out for more clarification about this. Rest assured that you are not the only person who finds the purpose of the `limit` parameter confusing. We are in the process of writing a more detailed description of its purpose, and are even considering renaming it to `per_page` in the next version of the client libraries.
As Recurly Support suggested, the `limit` parameter only affects the number of records returned in each API response. It does not affect the overall number of records returned when consuming all of the available pages. The client libraries add to the confusion here because the actual API requests are not directly visible to the programmer.
`limit` might be set to a value lower than the maximum (200) if timeouts are a potential issue; a developer could set a lower limit to avoid them.
If you are only interested in the first 50 records, set `limit` to 50 and break out of your loop once you've processed the records in the first page:
```python
from datetime import datetime

start_of_year = datetime(2020, 1, 1, 0, 0, 0)
accounts = client.list_accounts(limit=50, begin_time=start_of_year.isoformat())
for page in accounts.pages():
    for account in page:
        print(account.code)
    break  # stop after the first page of 50 records
```
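An alternative sketch, under the assumption that `items()` walks pages lazily (pulling a new page only when the iterator is advanced past the current one): `itertools.islice` stops consuming the iterator after n items, so further API requests would never be issued. The `items()` generator below is a hypothetical stand-in for the real `ItemIterator`, just to show the slicing pattern:

```python
from itertools import islice

def items():
    """Stand-in for pager.items() (hypothetical): yields records lazily,
    the way a page-walking iterator would."""
    for i in range(10_000):  # pretend there are many records server-side
        yield f"acct_{i}"

# islice stops pulling from the iterator after 50 items, so a lazy pager
# would stop issuing API requests at that point too.
first_50 = list(islice(items(), 50))
print(len(first_50))  # 50
```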
Hey @douglasmiller, thank you for the detailed response (and confirmation of sanity!). I understand the use case now (however odd it may seem), and also the workaround of breaking out of the page loop and limiting the number of records in the page. That'll work just fine, thanks!
@phillipadsmith we're also talking about adding a more official abstraction for your use case to the pagers. We're imagining an equivalent of the `pager.first()` method that instead returns the first `n` items as a list.
Hey there @bhelx, I actually just upgraded the client to be able to use the `first()` method, and a `first()` that takes `n` would be awesome. Thank you!
@phillipadsmith #396 implements this. Will discuss this with the team and try to get it into the mainline release.
Closing for now. We're working on documentation, and `take()` is out. We're also still likely to change this name in the next version, but will keep this one for backward compatibility. Feel free to continue the discussion here or re-open if you feel your issue hasn't been addressed.
Thanks @bhelx, greatly appreciated.