jacoby/Plus2RSS

maxResults is 20, pagination needed

Closed this issue · 6 comments

I'm only getting the most recent 20 Google+ posts.

According to https://developers.google.com/+/api/ maxResults is 20 by default: "In requests that can respond with potentially large collections, such as activities list, each response contains a limited number of items set by maxResults (default: 20)."
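As a concrete illustration, the default can be overridden per request by adding a maxResults query parameter (the API accepts up to 100 per page). A minimal Perl sketch of building such a request URL; the API key is a placeholder, not a real value:

```perl
#!/usr/bin/perl
# Hypothetical sketch: building a Google+ activities.list request URL
# with an explicit maxResults (the API defaults to 20 when omitted).
use strict;
use warnings;
use URI;

my $uri = URI->new('https://www.googleapis.com/plus/v1/people/me/activities/public');
$uri->query_form(
    maxResults => 100,            # API maximum per page is 100
    key        => 'YOUR_API_KEY', # placeholder value
);
print $uri->as_string, "\n";
```

Even at the maximum of 100, anything beyond one page still requires following pageToken, which is the pagination this issue asks for.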

It would be great if Plus2RSS supported pagination so I could download all of my posts.

I think I now need to start planning how to put in Getopt::Long
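The Getopt::Long controls mentioned above might look something like the following sketch; the option names (--user, --max) are illustrative guesses, not the actual Plus2RSS interface:

```perl
#!/usr/bin/perl
# Hypothetical sketch of Getopt::Long-based controls for the script.
# Defaults mirror the API: 20 results, the authenticated user.
use strict;
use warnings;
use Getopt::Long;

my $user_id     = 'me';   # placeholder default
my $max_results = 20;     # matches the API default
GetOptions(
    'user=s' => \$user_id,      # string option: which user's stream
    'max=i'  => \$max_results,  # integer option: how many posts
) or die "Usage: $0 [--user ID] [--max N]\n";

print "fetching $max_results posts for $user_id\n";
```

Getopt::Long is in the Perl core, so this adds no new dependencies.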

Hmm, I wonder why this issue was closed. INVALID or WONTFIX to use Bugzilla's terminology, I guess.

No, no. A thousand times no.

I looked at it, thought "Hey, this person wants to have a complete Plus backup thing. Not really the plan, but I can do that. I have very naive controls at the moment, so I'll have to use Getopt::Long to get a more robust set. And I'll have to re-read the API documentation to figure out what they should be. I can do that. I will do that. But I'll just leave a comment to indicate I've read it, and also give a reminder to myself of how I'll want to get into this."

Then I clicked a button.

"Closed? I marked it closed? NO!!!" But it wasn't immediately obvious how to re-open it, so I shut the laptop down and went to sleep, planning to get back to it when I got to the office this morning. Which I am.

I have to wonder about your use case. I run this once an hour on the hour, relying on Google Reader to keep track of previously-grabbed items. If I posted more than 20 times in an hour to Plus, I'd think I was spamming.

If you're using this to grab all of them, you either 1) need to think about what RSS is about or 2) need to look at the Data Liberation Front ( http://www.dataliberation.org/takeout-products/google-stream ) to get all your posts that way.

That being said, it is neither unreasonable nor likely that hard to add a maxResults flag, and I have added that to the code base. Handling pageToken would require me to re-engineer the code to separate the parsing of the JSON from the generation of the RSS. If you have code for that, I'd accept patches, but until then, or until I get curious enough about how to make that work, I'll say no to adding the tokens.
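For reference, the pageToken loop described above is structurally simple: request a page, accumulate its items, and pass the response's nextPageToken back until no token is returned. A sketch with a stubbed fetch_page() standing in for the real HTTP call to the activities.list endpoint:

```perl
#!/usr/bin/perl
# Sketch of a pageToken loop: accumulate all items across pages
# before handing them to the RSS generator. fetch_page() is a stub
# standing in for the real HTTP request; the two-page data is fake.
use strict;
use warnings;

my @pages = (
    { items => [ 'post 1', 'post 2' ], nextPageToken => 'abc123' },
    { items => [ 'post 3' ] },    # no nextPageToken: last page
);
sub fetch_page {
    my ($token) = @_;
    return defined $token ? $pages[1] : $pages[0];
}

my @all_items;
my $token;
while (1) {
    my $page = fetch_page($token);
    push @all_items, @{ $page->{items} };
    $token = $page->{nextPageToken};
    last unless defined $token;   # no token means no more pages
}
print scalar(@all_items), " items collected\n";
```

This is exactly the separation the comment calls for: the loop only gathers parsed items, and RSS generation would run once afterward over @all_items.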

I have to admit that the idea of using RSS as a backup format is new to me. Dave Winer discussed this at http://blog.stackoverflow.com/2011/11/se-podcast-27-dave-winer/

I am aware that Google+ has the whole Data Liberation "takeout" thing. In practice, I should probably just use that instead and forget about what Dave Winer had to say.

I do highly recommend listening to that podcast episode, though. :)

Well, that explains it a little. RSS is Dave Winer's idea, so for him, it's a bit of being the hammer that makes everything look like nails. Will listen to that podcast eventually, though.