Consider creating compressed netcdf files
Opened this issue · 5 comments
The output NetCDF files are not compressed. This can easily be checked with:
ncdump -s -h ../path/to/output.nc | grep -i deflate
Creating compressed netcdfs will greatly reduce the file size and is something we should consider.
Good tip. I will look into it.
Some years ago, I did some tests with the netcdf compression levels (1-9). Compression level 4 was the best trade-off between speed and compression ratio. The tests were done with Matlab's snctools, but I guess the same should hold true for the other netcdf libraries.
Additionally, we can reduce the file size by converting the original double-precision data to integer types, after scaling the original values (e.g. water level from meters to mm), without losing accuracy.
> The compression level no.4 was the best in terms of speed and compression.
I agree that as a rule of thumb it is a good choice.
> by converting the original double precision data to integer* ones, after converting the original values (e.g. water level in meters to mm) without loosing accuracy
True. That said, depending on the application, even some loss of accuracy can occasionally be acceptable.
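As a sketch of the packing idea discussed above: double-precision water levels in meters can be stored as int16 millimeters via the CF `scale_factor` encoding. The variable name `water_level`, the file name, and the sample values below are illustrative assumptions, not taken from the project.

```python
import numpy as np
import xarray as xr

# Illustrative dataset: water levels in meters as float64.
ds = xr.Dataset(
    {"water_level": (("time",), np.array([0.123, -1.456, 2.789]))}
)

# scale_factor=0.001 stores the values as integer millimeters;
# _FillValue marks missing data within the int16 range.
encoding = {
    "water_level": {
        "dtype": "int16",
        "scale_factor": 0.001,
        "_FillValue": -32768,
    }
}
ds.to_netcdf("water_level.nc", encoding=encoding)

# Round-tripping recovers the values to millimeter accuracy.
roundtrip = xr.open_dataset("water_level.nc")
assert np.allclose(roundtrip["water_level"], ds["water_level"], atol=0.001)
```

An int16 with a 0.001 scale factor covers roughly ±32 m, which is ample for water levels; the precision lost is below the millimeter scale.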
Implementing this will require passing an encoding parameter to the to_netcdf() calls.
http://xarray.pydata.org/en/stable/generated/xarray.Dataset.to_netcdf.html
encoding (dict, optional) – Nested dictionary with variable names as keys and dictionaries of variable specific encodings as values, e.g., {"my_variable": {"dtype": "int16", "scale_factor": 0.1, "zlib": True}, ...}
The h5netcdf engine supports both the NetCDF4-style compression encoding parameters {"zlib": True, "complevel": 9} and the h5py ones {"compression": "gzip", "compression_opts": 9}. This allows using any compression plugin installed in the HDF5 library, e.g. LZF.
Not necessarily related to the issue at hand, but the following links provide some interesting insights WRT data formats and compression:
https://docs.dask.org/en/latest/best-practices.html#store-data-efficiently
https://docs.dask.org/en/latest/dataframe-best-practices.html#store-data-in-apache-parquet-format
If the netcdf files are being created with the xarray.Dataset.to_netcdf() method, then we need to provide a value for the encoding parameter. It should be a dictionary of the form
{"my_variable": {"dtype": "int16", "scale_factor": 0.1, "zlib": True}, ...}
http://xarray.pydata.org/en/stable/generated/xarray.Dataset.to_netcdf.html