opengeospatial/NamingAuthority

Non-terrestrial Reference System Server for TB-19

Closed this issue · 11 comments

Testbed 19 will be working with non-terrestrial spatial, temporal, and spatial-temporal reference systems, including the IAU Reference Systems referred to in #212. For this purpose, we will need an online registry for the reference system definitions. Since the http://voparis-vespa-crs.obspm.fr:8080/ws/IAU/2015 link is unreliable, we would like the OGC Definitions Server to host the Testbed 19 definitions.

According to #212, this is a pass-through link to a website that is either unavailable or incredibly slow. That makes it unusable. Furthermore, there are open issues with capturing non-terrestrial CRS definitions which would require changes to the current content. Such changes are easier to make on a register that we control.
To be clear, we are talking about a Beta registry. Formal approval of the entries is not necessary, particularly since many of them are still works in progress.

pebau commented

The specific IAU/2015/ subdirectory has been redirected by OGC to what you describe. But generally, IAU/ is under our control; try https://www.opengis.net/def/crs/EPSG/0/4326 to gauge the speed of the service overall. If there is an issue, just contact me.

Drawback: currently supports only GML.
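For anyone who wants to reproduce the speed check suggested above, here is a minimal Python sketch; the five-request sample size and the 30-second timeout are arbitrary illustrative choices, not OGC requirements:

```python
# Rough latency check against the OGC Definitions Server.
# Requires the third-party "requests" package.
import time
import requests

URL = "https://www.opengis.net/def/crs/EPSG/0/4326"

timings = []
for _ in range(5):  # small, arbitrary sample
    start = time.monotonic()
    response = requests.get(URL, timeout=30)
    elapsed = time.monotonic() - start
    response.raise_for_status()
    timings.append(elapsed)

print(f"min {min(timings):.2f}s  max {max(timings):.2f}s  "
      f"mean {sum(timings) / len(timings):.2f}s")
```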

The /IAU/2015 tree is controlled by the Planetary DWG (under delegation from the OGC-NA) and is maintained by CNES. @cmheazel I ran a number of tests on the server and it is responding within a second, as expected. Which requests is the server responding to slowly?

The /IAU/0 tree is maintained by Jacobs University, also under delegation from the OGC-NA.

Cc: @J-Christophe

@ghobona Fair enough.
Can we hold this issue open to capture feedback from Testbed 19?

@cmheazel We'll keep the issue open until Testbed-19 ends.

@ghobona I'll just mention that the CNES server, when active, has okay performance, but I have first-hand experience of it going completely down for weeks at a time. Given the semi-static nature of the GML responses (they won't change for months or years at a time), I think it would make sense to cache the server responses in some way so that there isn't a single point of failure.
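A minimal sketch of the kind of client-side caching suggested here, assuming definitions are fetched by URL; the cache location, hashing scheme, and fallback behaviour are illustrative choices, not an agreed design:

```python
# Fetch a CRS definition, falling back to a local copy if the
# remote registry is unreachable (sketch only).
import hashlib
import pathlib
import requests

CACHE_DIR = pathlib.Path("crs_cache")  # illustrative location

def get_definition(url: str) -> str:
    CACHE_DIR.mkdir(exist_ok=True)
    cached = CACHE_DIR / (hashlib.sha256(url.encode()).hexdigest() + ".gml")
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        cached.write_text(response.text, encoding="utf-8")  # refresh the cache
        return response.text
    except requests.RequestException:
        if cached.exists():  # registry down: serve the stale local copy
            return cached.read_text(encoding="utf-8")
        raise

# Example usage (URL is the registry root quoted earlier in this thread):
# gml = get_definition("http://voparis-vespa-crs.obspm.fr:8080/ws/IAU/2015")
```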

@AndrewAnnex This should no longer happen as of April 5th. The application runs under Docker, and we have installed a systemd service that monitors the container and restarts it if a problem is detected or if the machine is rebooted. At worst, if a failure happens, the outage should last only a few seconds.
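For reference, one common pattern for this kind of systemd supervision of a Docker container looks roughly like the sketch below; the unit and container names are hypothetical, and this is not the actual CNES configuration:

```ini
# /etc/systemd/system/crs-registry.service  (hypothetical names throughout)
[Unit]
Description=Keep the CRS registry container running
Requires=docker.service
After=docker.service

[Service]
# "docker start -a" attaches to the existing container so the unit stays
# in the foreground; Restart=always brings it back if it ever exits.
ExecStart=/usr/bin/docker start -a crs-registry
ExecStop=/usr/bin/docker stop crs-registry
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```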

I think your problem occurred before this date, right? If not, let me know.

@J-Christophe Yes, I was seeing this issue a few months ago. I believe the Docker container is still running on only a single server, though, so by definition it is still a single point of failure? There are no backup servers, nor is the container running on a managed service like AWS Fargate?

Possibly this is a side issue and not relevant to this one.

@AndrewAnnex The decision to implement an overly complex architecture can carry extra cost and have detrimental effects on the environment in terms of carbon footprint. In my organisation, we have guidelines in place to try to reduce our carbon footprint.

For the moment, the availability of the server and the application does not seem bad to me relative to the need, but maybe I'm wrong (it's true that there were some problems at the beginning, but everything has been resolved since then, and I improved things again yesterday). If you notice any problem, don't hesitate to report it here: https://github.com/opengeospatial/Planetary-DWG/issues

Before implementing a more complex architecture, I would like to know whether an availability requirement is defined by the OGC Naming Authority, and what needs lie behind it, in order to justify the architecture.

Secondly, I am not opposed to the data being cached. But then it's just a matter of responsibility and process.