Revisit GeolocationSensor.Accuracy and Latency
The current definition (https://w3c.github.io/geolocation-sensor/#geolocationsensor-accuracy) does not seem to be consistent with platform accuracy definitions, e.g. Windows (https://docs.microsoft.com/en-us/uwp/api/windows.devices.geolocation.geocoordinate.accuracy) and Android (https://developer.android.com/reference/android/location/Location#getAccuracy()). Android, for instance, explicitly defines accuracy as the radius of 68% confidence, while the spec does not specify a confidence level.
One possibility is to allow implementations to return an accuracy confidence level in addition to the accuracy radius in meters, e.g. `dictionary Accuracy { double radius; double confidence; };`
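A rough sketch of how that could surface to script, assuming for illustration that `GeolocationSensor.accuracy` became such a dictionary (in the current draft it is a plain number of meters):

```js
// Hypothetical: "accuracy" as the proposed { radius, confidence }
// dictionary rather than the current plain double in meters.
const sensor = new GeolocationSensor();
sensor.addEventListener('reading', () => {
  const { radius, confidence } = sensor.accuracy;
  console.log(`Within ${radius} m at ${confidence * 100}% confidence`);
});
sensor.start();
```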
To add to that, on iOS there is the concept of `horizontalAccuracy`, which returns "[t]he radius of uncertainty for the geographical coordinate, measured in meters". This is part of the result of a call to `requestLocation()`, where you can specify the `desiredAccuracy`, a concept entirely absent from the current Geolocation `ReadOptions`.
Essentially, we need to get agreement on whether we want to allow a desired accuracy and/or a desired latency (where one may affect the other) as input parameters for `GeolocationSensor.read()`.
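For concreteness, a sketch of what such input parameters might look like; `desiredAccuracy` and `desiredLatency` are hypothetical names, and neither exists in the current draft:

```js
// Hypothetical options illustrating the proposal; only the one-shot
// GeolocationSensor.read() itself is in the current draft.
const reading = await GeolocationSensor.read({
  desiredAccuracy: 10,  // target radius in meters (hypothetical)
  desiredLatency: 5000, // maximum acceptable wait in ms (hypothetical)
});
console.log(reading.latitude, reading.longitude, reading.accuracy);
```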
For the legacy API, the suggested pattern is to first get whatever position data is quickly available with `navigator.geolocation.getCurrentPosition()` and then keep watching in case better data comes in via `navigator.geolocation.watchPosition()`, as sketched below.
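A minimal sketch of that pattern (the option values are illustrative, not normative):

```js
function onPosition(position) {
  const { latitude, longitude, accuracy } = position.coords;
  console.log(`${latitude}, ${longitude} (±${accuracy} m)`);
}

// 1. Take whatever fix is quickly available, even a cached one.
navigator.geolocation.getCurrentPosition(onPosition, console.error, {
  enableHighAccuracy: false,
  maximumAge: 60000, // accept a position up to a minute old
  timeout: 5000,
});

// 2. Keep watching so the result can be refined as better data comes in.
const watchId = navigator.geolocation.watchPosition(onPosition, console.error, {
  enableHighAccuracy: true,
});
// Later, when no longer needed: navigator.geolocation.clearWatch(watchId);
```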
For Android, the low-level location strategies are well documented. In general, it is recommended to use higher-level abstractions, such as the fused location provider, that automatically deal with the concrete details.
For iOS, the desired accuracy can be requested from its location manager via the `desiredAccuracy` property. Unlike Android, where a specific technology (which implies a certain accuracy) can be chosen, iOS expresses accuracy in human-language terms (e.g. `kCLLocationAccuracyBestForNavigation`).