Systematic Street View Sampler, published in the Proceedings of the 15th Conference on Computer and Robot Vision (CRV), 2018.
Authors: Kevin Dick, Francois Charih, Yasmina Souley Dosso, Luke Russell, Dr. James R. Green
Google Street View and the emergence of self-driving vehicles afford an unprecedented capacity to observe our planet. Fused with dramatic advances in artificial intelligence, these data streams offer the capability to extract patterns and meaning, heralding an era of new insights into the physical world. To draw appropriate inferences about and between environments, the data must be selected systematically so that samples are representative and unbiased. To this end, we introduce the Systematic Street View Sampler (S3) framework, which enables researchers to produce their own user-defined datasets of Street View imagery. We describe the algorithm and express its asymptotic complexity in relation to a new limiting computational resource (Google API Call Count). Using the Amazon Mechanical Turk distributed annotation environment, we demonstrate the utility of S3 in generating high-quality, representative datasets useful for machine vision applications. The S3 algorithm is open-source and available here, along with the high-quality dataset representing power infrastructure in rural regions of southern Ontario, Canada.
The S3 algorithm comes in two flavours: Python and Node.js.
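Neither implementation is reproduced here, but the short Python sketch below illustrates the general idea behind systematic Street View sampling: walk a uniform grid over a user-defined bounding box, query the Street View metadata endpoint at each point to check whether a panorama exists, and track the number of Google API calls as the limiting resource. The bounding box, step size, and helper names (sample_region, frange) are illustrative assumptions and not part of the published S3 code.

```python
"""
Minimal sketch of systematic Street View sampling over a bounding box.
This illustrates the general idea only; it is not the authors' S3 code,
and the region, step size, and helper names are assumptions.
Requires the `requests` package and a Google API key with access to the
Street View Static API.
"""
import requests

METADATA_URL = "https://maps.googleapis.com/maps/api/streetview/metadata"
API_KEY = "YOUR_GOOGLE_API_KEY"  # placeholder


def frange(start, stop, step):
    """Yield evenly spaced floats in [start, stop)."""
    x = start
    while x < stop:
        yield x
        x += step


def sample_region(lat_min, lat_max, lng_min, lng_max, step_deg):
    """
    Walk a uniform lat/lng grid and record every grid point where Street
    View imagery exists. Also counts metadata requests, since the number
    of Google API calls is the limiting resource the paper analyses.
    """
    panoramas, api_calls = [], 0
    for lat in frange(lat_min, lat_max, step_deg):
        for lng in frange(lng_min, lng_max, step_deg):
            resp = requests.get(
                METADATA_URL,
                params={"location": f"{lat},{lng}", "key": API_KEY},
                timeout=10,
            )
            api_calls += 1
            meta = resp.json()
            if meta.get("status") == "OK":
                # Keep the snapped panorama location and its unique ID.
                panoramas.append(
                    (meta["pano_id"],
                     meta["location"]["lat"],
                     meta["location"]["lng"])
                )
    return panoramas, api_calls


if __name__ == "__main__":
    # Hypothetical bounding box over a small rural area; a step of
    # roughly 0.001 degrees corresponds to about 100 m of latitude.
    panos, calls = sample_region(45.00, 45.02, -76.02, -76.00, 0.001)
    print(f"Found {len(panos)} panoramas using {calls} API calls")
```

In this sketch, the API call count grows linearly with the number of grid points, on the order of (lat range / step) x (lng range / step); the paper expresses the complexity of the actual S3 algorithm in terms of this same API Call Count resource.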