appelmar/gdalcubes

ability to track & restart timeouts?


When a cube is constructed with a large vector of asset URLs (e.g. from a STAC catalog), it's not uncommon for some of those resources to time out. The most straightforward example can be produced with the Microsoft Planetary Computer STAC API, where the signed URL signatures expire after 45 minutes. This means that a long-running computation on remote assets can silently fail to fetch many of them. (A similar issue can occur in other contexts where an arbitrary URL times out, though in such cases we can perhaps just set the GDAL configuration regarding timeouts and retries to be more stubborn? See the sketch below.)

In any event, it would be very helpful if a list of the URLs that fail could be logged and retrieved in some way. (Maybe this already exists?) Given the lazy-eval design this probably couldn't be returned from the function call itself, but maybe failed URLs could be logged to disk somewhere and a helper function could be used to retrieve (and possibly re-sign and then re-query?) such URLs?
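For the transient-timeout case, a minimal sketch of the GDAL-level workaround mentioned above might look like the following (shown in Python purely for illustration; the same options can equally be set as environment variables or through whatever GDAL configuration mechanism the calling code exposes, and the asset URL is a placeholder). Note this only makes GDAL retry more and wait longer; it would not help with expired Planetary Computer signatures, which need to be re-signed.

```python
from osgeo import gdal

# Retry transient HTTP failures a few more times before giving up
gdal.SetConfigOption("GDAL_HTTP_MAX_RETRY", "5")      # number of retries
gdal.SetConfigOption("GDAL_HTTP_RETRY_DELAY", "10")   # seconds between retries

# Be more patient with slow servers
gdal.SetConfigOption("GDAL_HTTP_CONNECTTIMEOUT", "30")  # connection timeout (s)
gdal.SetConfigOption("GDAL_HTTP_TIMEOUT", "120")        # whole-request timeout (s)

# Any subsequent /vsicurl/ access uses the settings above
# (placeholder URL, not a real asset)
ds = gdal.Open("/vsicurl/https://example.com/assets/B04.tif")
```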