Tronald/CoordinateSharp

Coordinate.TryParse reduces value by 1 in UTM coordinates

Closed this issue · 4 comments

Describe the bug
Given the input 54H 754082mE 5597699mN, the method Coordinate.TryParse(input, out var result) produces 54H 754082mE 5597698mN when result.UTM.ToString() is called.

To Reproduce

string input = "54H 754082mE 5597699mN";
if (Coordinate.TryParse(input, out var result))
{
    // Fails: result.UTM.ToString() returns "54H 754082mE 5597698mN"
    Assert.Equal(input, result.UTM.ToString());
}

Expected behavior
Given the input 54H 754082mE 5597699mN, TryParse should round-trip to exactly the same value.

Environment (please complete the following information):

  • .NET Core 3.1
  • CoordinateSharp 2.7.3.2
  • Web API

Thank you for checking out CoordinateSharp.

The meter of precision loss you are seeing is due to the fact that CoordinateSharp operates in geodetic lat/long (an angular system). When you "parse" a UTM coordinate, the values are actually converted to lat/long for verification and operation. The provided coordinate converts to the following.

Signed Lat/Long: -39.73276, 143.964833

When the coordinate is then converted back to UTM from the stored lat/long coordinate, it actually represents 54H 754082mE 5597698mN (1 meter of northing precision loss). This is a limitation of the math-based conversion between UTM and lat/long, and there is really no way around it. You generally will not see more than 1 or 2 meters of precision loss.

If the precision loss is too great for your use case, you may need to build a custom UTM parser and then construct your UTM coordinate using the UniversalTransverseMercator constructor.
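A minimal sketch of that approach is below. The hand-rolled parsing is hypothetical, and the UniversalTransverseMercator constructor is assumed to take the latitude zone, longitude zone, easting, and northing in that order; check the signature against your CoordinateSharp version.

```csharp
using System;
using CoordinateSharp;

class UtmParseSketch
{
    static void Main()
    {
        // Hypothetical hand-rolled parse of "54H 754082mE 5597699mN".
        string input = "54H 754082mE 5597699mN";
        string[] parts = input.Split(' ');
        int longZone = int.Parse(parts[0].Substring(0, parts[0].Length - 1));
        string latZone = parts[0].Substring(parts[0].Length - 1);
        double easting = double.Parse(parts[1].TrimEnd('m', 'E'));
        double northing = double.Parse(parts[2].TrimEnd('m', 'N'));

        // Construct the UTM coordinate directly, avoiding a lat/long round trip
        // at parse time. Constructor order assumed: (latZone, longZone, easting, northing).
        var utm = new UniversalTransverseMercator(latZone, longZone, easting, northing);
        Console.WriteLine($"{utm.LongZone}{utm.LatZone} {utm.Easting}mE {utm.Northing}mN");
    }
}
```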

Thanks for the explanation. I still believe this is incorrect behavior.
The issue is that the problem compounds with each conversion.
What happens in our case:

  1. User enters UTM coordinates.
  2. We convert the coordinates using CoordinateSharp and store them.
  3. For whatever reason user wants to update the object with UTM coordinates.
  4. On save we convert the coordinates again, at which point the precision loss is now 2 meters.
  5. Repeat for every object save.

Could the limitation be due to the fact that double is used for this math? Would it be an option to switch to something more precise, like decimal?

Ok, I think I understand your problem a bit more.

Any floating-point precision loss from working in double should only appear at the centimeter level of the Easting and Northing values, so that shouldn't be a factor. I believe the issue you are having is mainly due to the truncation of values that the standard calls for.

Easting and Northing values are truncated when .ToString() is called, as it formats the coordinate to a compliant format. If you read the values individually, however (e.g. Coordinate.UTM.Easting), you will receive the actual values rather than the truncated ones. This greatly reduces your precision loss.
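For example, a sketch contrasting the formatted string with the individual properties (assuming Easting and Northing are exposed as doubles, per the discussion here):

```csharp
using System;
using CoordinateSharp;

class FullPrecisionRead
{
    static void Main()
    {
        if (Coordinate.TryParse("54H 754082mE 5597699mN", out Coordinate c))
        {
            // ToString() truncates per the standard format...
            Console.WriteLine(c.UTM.ToString());

            // ...but the individual properties keep the full double values
            // (e.g. a northing near 5597698.9998664 for this coordinate).
            Console.WriteLine(c.UTM.Easting);
            Console.WriteLine(c.UTM.Northing);
        }
    }
}
```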

Completely avoiding precision loss between systems is impossible as far as my knowledge goes (at least in terms of the conversion formulas I have worked with). If you remove truncation from the equation, though, you should get a lot of it back.

I would consider anything past the decimal point to be "UTM centimeter" precision.

My advice is to read the values individually and then round them if you need to maintain that conversion integrity back and forth. Just know that you are going against the system's standard conversion convention, which truncates rather than rounds, but for your purpose I believe this makes sense.

So if we read the UTM northing value in the example you provided after converting, we have:

5597699mN -> 5597698.9998664mN

Truncation rules are shaving that meter of precision off, but if you switch to rounding you will get it back, since the actual precision lost is incredibly small (in this instance anyway).
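Concretely, with the northing value above:

```csharp
using System;

class TruncateVsRound
{
    static void Main()
    {
        // Actual converted northing from the example above.
        double northing = 5597698.9998664;

        // Truncation (the standard's formatting rule) drops the fraction entirely:
        Console.WriteLine(Math.Truncate(northing)); // 5597698

        // Rounding recovers the intended meter value:
        Console.WriteLine(Math.Round(northing));    // 5597699
    }
}
```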

You will most likely need to build your own string from the individual UTM properties to get this behavior.

Example

string s = $"{coord.UTM.LongZone}{coord.UTM.LatZone} {Math.Round(coord.UTM.Easting)}mE {Math.Round(coord.UTM.Northing)}mN";

You may still lose more precision depending on where you are operating, but I believe this should mitigate most of the issues applicable to your use case.

Give it a shot and report back to let me know if it fixes the issue.

Thanks, I could see the precision in the internal values was greater, but didn't think to just construct the string myself.
This works and, as far as I can see, solves our original issue: the value stays the same no matter how many times it is parsed and converted.