Should sp::CRS() cache CRS objects?
In debugging the failure of spatsoc on PROJ 8.0.0 (ropensci/spatsoc#34), which turned out to be a false warning from rgdal because a verbatim "DATUM" node was missing from the WKT2 representation (for WGS84, it says ENSEMBLE, not DATUM), I saw https://github.com/ropensci/spatsoc/blob/f333bbe8c772b9d4c64d6edfcd3bd464d5f6b68e/R/build_lines.R#L134-L142, with calls to sp::CRS() inside an lapply(). From Rprof(), the churn through rgdal::checkCRSArgs_ng() down to .Call() is significant, yet the argument passed to sp::CRS() is always the same here.
Shall I start drafting code for caching the returned "CRS" objects? Might this extend to sf too? My initial take would be modelled on the parseGRASS() caching in rgrass7 (https://github.com/rsbivand/rgrass7/blob/911dad09e253025cfa304dc3dd6f17b8afd808ed/R/xml1.R#L1-L130), where a lookup is performed if the input argument matches a name in a list.
Why not leave that optimisation as a task for spatsoc? Caching is one of those things we really shouldn't get into, I believe - "There are only two hard things in Computer Science: cache invalidation and naming things."
In a contrived example, the time spent in sp::CRS() is reduced from 9.00s to 0.12s by caching in the rgrass7 style, which is essentially using a named list as an associative array. At some stage I'll run reverse dependency checks on sp with caching enabled. From reading package code, tests and examples, many (very many) packages use the same old PROJ.4 string representation in loops and *apply() calls, as though looking up the CRS/crs were simply assigning a string, rather than checking a string against an external SQL database. Originally it was just assigning a string, and I think many still see it that way.
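A minimal sketch of what the rgrass7-style lookup might look like for sp::CRS(), using an environment keyed by the input string; the names cachedCRS and .CRS_cache are hypothetical illustrations, not sp's actual implementation:

```r
# Hypothetical sketch only: a memoizing wrapper around sp::CRS().
# The names cachedCRS and .CRS_cache are illustrative, not part of sp.
.CRS_cache <- new.env(parent = emptyenv())

cachedCRS <- function(projargs) {
  # Use the input string itself as the lookup key
  obj <- .CRS_cache[[projargs]]
  if (!is.null(obj)) {
    return(obj)  # cache hit: skip the expensive PROJ validation
  }
  obj <- sp::CRS(projargs)      # cache miss: validate against the PROJ database
  .CRS_cache[[projargs]] <- obj # store for subsequent identical calls
  obj
}

# Repeated calls with the same argument then pay the validation cost once:
# crs_list <- lapply(seq_len(1000), function(i) cachedCRS("EPSG:4326"))
```

An environment with emptyenv() as parent acts as a simple hash map, avoiding the copy-on-modify cost of growing a plain list; invalidation (e.g. if the underlying PROJ/EPSG database changes mid-session) is the hard part this sketch ignores.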