captif-nz/pyramm

Improve caching


A local cache is currently used when duplicate requests are made to the remote API. This returns the locally stored data instead of retrieving the data from the remote API again.

Table data is only cached for one day as there is currently no mechanism to check if it is out of date. Adding a mechanism to check whether the data is out of date and update the local copy would reduce the number of calls to the API and speed up data retrieval.
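For context, the current one-day expiry amounts to something like the sketch below (illustrative only, not the actual pyramm code; is_expired and MAX_AGE are made-up names):

    from datetime import datetime, timedelta
    from pathlib import Path

    MAX_AGE = timedelta(days=1)


    def is_expired(cache_file: Path, max_age: timedelta = MAX_AGE) -> bool:
        # A cached table is considered stale if the file is missing or its
        # modification time is more than max_age in the past.
        if not cache_file.exists():
            return True
        modified = datetime.fromtimestamp(cache_file.stat().st_mtime)
        return datetime.now() - modified > max_age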

The actual local storage could take two forms:

  1. Modify the existing temporary file storage to allow it to be updated.
  2. Convert to a local SQLite database.

Both options would require a mechanism to check if the local data is out of date.
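As a rough illustration of option 2, a SQLite cache could keep a small metadata table recording when each RAMM table was last refreshed and compare that against the remote data's last-modified time. The sketch below is hypothetical: it assumes the API can report when a table last changed (the remote_last_modified argument), and cache_metadata, needs_refresh() and mark_refreshed() are illustrative names, not part of pyramm.

    import sqlite3
    from datetime import datetime

    SCHEMA = """
    CREATE TABLE IF NOT EXISTS cache_metadata (
        table_name TEXT PRIMARY KEY,
        last_refreshed TEXT NOT NULL
    )
    """


    def needs_refresh(conn, table_name, remote_last_modified):
        # Refresh if the table has never been cached or the remote copy is newer.
        row = conn.execute(
            "SELECT last_refreshed FROM cache_metadata WHERE table_name = ?",
            (table_name,),
        ).fetchone()
        return row is None or remote_last_modified > datetime.fromisoformat(row[0])


    def mark_refreshed(conn, table_name):
        # Record the time at which table_name was last pulled from the remote API.
        conn.execute(
            "INSERT OR REPLACE INTO cache_metadata (table_name, last_refreshed) VALUES (?, ?)",
            (table_name, datetime.now().isoformat()),
        )
        conn.commit()


    conn = sqlite3.connect("pyramm_cache.db")
    conn.execute(SCHEMA)

Option 1 would need an equivalent check, just with the timestamp stored alongside the temporary files instead of in a database.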

I need to work on multiple databases within one setting. What would be the best way to clear this local cache each time I need to access a new database? Any help would be greatly appreciated! Thanks

@Winda2 The quick workaround would be to call _ = [path.unlink() for path in pyramm.cache.TEMP_DIRECTORY.glob("*")]. This will step through each file in the file cache directory and delete it (the parent directory itself is kept).
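Written out as a standalone snippet, the same workaround is:

    import pyramm.cache

    # Delete every file in the cache directory but leave the directory itself.
    for path in pyramm.cache.TEMP_DIRECTORY.glob("*"):
        path.unlink()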

A better long-term solution is for the file cache system to include the RAMM database name. This way the file cache will work with multiple databases. New issue added: #30
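Roughly, the idea is something like the following (a hypothetical sketch, not the pyramm implementation; cache_file_for and the .pickle suffix are illustrative):

    from pathlib import Path


    def cache_file_for(cache_dir: Path, database: str, table_name: str) -> Path:
        # Including the RAMM database name in the filename keeps cached tables
        # from different databases separate, so no manual clearing is needed
        # when switching databases.
        return cache_dir / f"{database}__{table_name}.pickle"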

Thanks @johnbullnz that was so helpful. Appreciate it!