obs: normalize web vs. API params to minimize differences in counts
The `me` command sometimes shows values that differ from the web. Any discrepancies in this display should be reduced, to the best of our ability.
Apart from outright bugs like #158 , discrepancies could also arise if there's any difference in the cache retention period of the bot's API calls vs. the web.
By experimentation, I've determined that caching of project stats in user links from the project page leaderboard on the web is influenced by two parameters:

- `v=#` is appended; this appears to be the timestamp in ms of the project's `updated_at` field, and it ensures that the display will change after the project stats are updated.
- `ttl=900` is appended to set the Cache-Control max-age to 15 minutes, so that the API is not overtaxed by people refreshing displays before the project stats have been updated.
Additionally, since the effectiveness of setting the ttl depends on the URL being identical, we need to pass not only `verifiable=any` but also `place_id=any`, which I suppose unsets any default place_id filter normally supplied by regional iNat partner sites (e.g. the place_id for Canada on inaturalist.ca). That will fully align the URLs we emit with what the user sees when they look at the leaderboard for the project, and therefore should result in greater consistency, since the same cached values would be used both on the iNat site and in the bot display at any given time.
To sum up so far: adding all four of these parameters (`v`, `ttl`, `verifiable`, and `place_id`, set to the values indicated above) to the bot's `obs#` link should make the number shown by the bot consistent with what the user would see if they looked at their numbers on the project leaderboard.
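As a rough sketch of the URL construction described above (the parameter names come from this issue; the helper name and base URL are illustrative assumptions, not Dronefly's actual code):

```python
from urllib.parse import urlencode

def leaderboard_obs_url(user_id, project_id, updated_at_ms):
    """Build an observations URL aligned with the web leaderboard's links.

    updated_at_ms: the project's "updated_at" timestamp in milliseconds,
    passed as the cache-busting "v" parameter.
    """
    # Parameter order must stay consistent across calls, since cache hits
    # depend on the URL string being byte-for-byte identical.
    params = {
        "user_id": user_id,
        "project_id": project_id,
        "v": updated_at_ms,   # changes whenever project stats are updated
        "ttl": 900,           # Cache-Control max-age = 15 minutes
        "verifiable": "any",  # state the web default explicitly
        "place_id": "any",    # unset any partner-site place filter
    }
    return "https://www.inaturalist.org/observations?" + urlencode(params)
```

Because `urlencode` preserves dict insertion order, every URL the bot emits for the same user/project/timestamp is identical, which is what lets the shared cache entry do its job.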
On the other hand, a simpler approach, and one which might align a bit better with the user's expectations (because they are probably looking at their own profile page for the `ever` stat), would be to replace the `ever` project in this display with the stats obtained from a `/v1/users` call (i.e. the call that is used on the iNat site for user profile pages). But that doesn't negate the fact that, in general, we don't handle this well across the various displays that show similar stats (`,tab`, `,project stats`, etc.). Therefore, if we go this route for this particular display, a separate issue should be made for it.
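For reference, the alternative approach would read the `ever` numbers from the same response the profile page uses. A minimal sketch of pulling those counts out of a `/v1/users/{id}` response follows; the field names are assumed from the public iNat API's user record, and this is not Dronefly's actual implementation:

```python
def user_stats(api_response):
    """Extract lifetime counts from a /v1/users/{id} JSON response.

    The response is assumed to have the usual API v1 envelope: a
    "results" list whose first entry is the user record, carrying
    "observations_count" and "identifications_count" fields.
    """
    user = api_response["results"][0]
    return {
        "observations": user["observations_count"],
        "identifications": user["identifications_count"],
    }
```

Using the same endpoint as the profile page would sidestep the leaderboard-cache alignment entirely for this one display, at the cost of leaving the other project-stat displays on their current code path.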
Finally, if after fixing this issue we still occasionally see mismatches, and we confirm we have done all we can to use the same API calls as the iNat site, we need to determine the reason (e.g. perhaps caching artifacts remain) and then precisely document how and when such discrepancies may arise, so that when users spot those issues, we'll have a ready answer for them.
It's not only observations that are affected, but also identifications. We observed today that after one user crossed the 20K identifications threshold, Dronefly did not show the updated count in their `,me` display to match what they could see on their profile page, and even two hours later the count was still not updated. That should be looked at separately whenever we get around to addressing the observation counts.