Overlapping S3 metrics data for buckets
rahulreddy opened this issue · 1 comments
In a bucket's timeline, it may happen that the bucket was created, destroyed, and then created again with the same name. The operations counts, which are delta values, are unaffected by this, since each data point is created with its own timestamp. The absolute counters, such as incoming bytes, outgoing bytes, and number of objects, are reset when a delete-bucket push-metric call is received.
With this context, if a list-metrics call is made with a timestamp range that spans both the old and the new bucket, the returned data will be incorrect. The solution is either to return an Internal Error or to provide the metrics as two separate items in the JSON response.
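A minimal sketch of the behavior described above, using a hypothetical data model (the names `BucketMetrics`, `push_op`, `delete_bucket`, and `list_metrics` are illustrative, not the actual service API): delta metrics are stored as timestamped data points and survive a delete/recreate, while an absolute counter is reset, so a range query spanning both lifetimes mixes the two.

```python
from dataclasses import dataclass, field

@dataclass
class BucketMetrics:
    # Delta metrics: one timestamped data point per operation, so points
    # from the old and the new bucket both remain in the timeline.
    op_points: list = field(default_factory=list)  # (timestamp, count) pairs
    # Absolute counter: a single running value, reset on bucket delete.
    incoming_bytes: int = 0

def push_op(m, ts, count=1):
    m.op_points.append((ts, count))

def push_bytes(m, n):
    m.incoming_bytes += n

def delete_bucket(m):
    # Absolute counters are reset; delta data points keep their timestamps.
    m.incoming_bytes = 0

def list_metrics(m, start, end):
    ops = sum(c for ts, c in m.op_points if start <= ts <= end)
    # The absolute counter only reflects the current bucket lifetime,
    # even though the requested range also covers the old one.
    return {"operations": ops, "incomingBytes": m.incoming_bytes}

m = BucketMetrics()
push_op(m, ts=1); push_bytes(m, 100)   # old bucket
delete_bucket(m)                       # bucket destroyed, counters reset
push_op(m, ts=5); push_bytes(m, 40)    # recreated with the same name

result = list_metrics(m, start=0, end=10)
# Operations (deltas) cover both lifetimes; incomingBytes only the new one.
print(result)  # {'operations': 2, 'incomingBytes': 40}
```

The query reports 2 operations (spanning both lifetimes) but only 40 incoming bytes (the new bucket only), which is the inconsistency the issue describes.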
Adding a ticket ZENKO-1475 in Jira for tracking. Closing this as we're moving away from managing issues through GitHub.