carbonblack/carbon-black-cloud-sdk-python

[BUG] Process.events() where processed_segments != total_segments may result in infinite loop

csanders-git opened this issue · 2 comments

I am seeing this behaviour on:
Python 3.8, latest Master branch

Describe the bug

  • It seems that there is some logic in the wrapper to deal with the not-well-defined concept of segments within the API. [1]
  • We have been experiencing an issue with the API whereby we enter what appears to be an infinite loop.
  • We have a response where _processed_segments != _total_segments (derived from Process.events).
  • In this condition the wrapper will continue to re-request the event information in a while loop until _processed_segments == _total_segments (see the loop sketch below).
  • We have observed the wrapper re-issuing the EventQuery POST request [2] for the past week without any change in the process event data response.

[1]

if self._processed_segments != self._total_segments:

[2]

While debugging we continuously see the same response:

{'results': [...], 'num_found': 1, 'num_available': 1, 'total_segments': 5833, 'processed_segments': 5501}
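
For illustration, here is a minimal sketch of the kind of unbounded reconciliation loop described above, assuming a hypothetical post_event_query helper that issues the event search POST request and returns the parsed JSON; the names here are illustrative and are not the SDK's actual internals.

def fetch_all_events(post_event_query):
    # Sketch of the reported behaviour, not the SDK's real code.
    processed_segments = 0
    total_segments = None
    results = []
    # If the backend never reports processed_segments == total_segments,
    # this loop never terminates, which is the bug described above.
    while total_segments is None or processed_segments != total_segments:
        response = post_event_query()
        results = response["results"]
        processed_segments = response["processed_segments"]
        total_segments = response["total_segments"]
    return results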

Steps to Reproduce
It's somewhat unclear how to reproduce this, as processed_segments and total_segments have little documentation of their purpose in the official docs (https://developer.carbonblack.com/reference/carbon-black-cloud/platform/latest/platform-search-api-processes/).

What we do observe is that this shortcoming seems to result from either a failure in the backend implementation or a misjudgement by the API wrapper about how long this reconciliation will take.

Expected behavior
If these requests can take a seemingly infinite amount of time to complete (or longer than the expected runtime of most applications), we'd expect a maximum loop iteration count for these enrichments.

Pull request #89 adds a retry limit to the event query loops. The limit is 10 retries, and the counter is reset whenever the processed_segments count advances, so the query only gives up after 10 consecutive attempts without forward progress. @csanders-git please check out the latest from the develop branch and verify the fix.
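
For reference, a minimal sketch of that retry pattern, again assuming the hypothetical post_event_query helper from the sketch above; the actual implementation in #89 may differ in names and structure.

MAX_RETRIES = 10  # illustrative constant; see PR #89 for the real value

def fetch_all_events(post_event_query):
    retries = 0
    processed_segments = 0
    while True:
        response = post_event_query()
        if response["processed_segments"] == response["total_segments"]:
            return response["results"]
        if response["processed_segments"] > processed_segments:
            # Forward progress: remember the new count and reset the budget.
            processed_segments = response["processed_segments"]
            retries = 0
        else:
            retries += 1
            if retries >= MAX_RETRIES:
                raise TimeoutError(
                    "event query stalled at %d of %d segments"
                    % (processed_segments, response["total_segments"])
                )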

This has been fixed with the latest release.