cjcoats/ioapi-3.2

netCDF4 interfaces

aidanheerdegen opened this issue · 4 comments

Are there any plans to upgrade the code to be netCDF4 compliant? It would be great to be able to add new features such as compression to the code.

Aidan Heerdegen wrote:

Are there any plans to upgrade the code to be netCDF4 compliant? It would
be great to be able to add new features such as compression to the code.

There are two issues currently in-progress:

 PnetCDF support for MPI-distributed CMAQ

 Use netCDF-3 ("NF_*) interfaces throughout (instead of netCDF-2 "NC*")
 replacing over 900 calls that date back to the eary Nineties before
 netCDF3:  recent netCDF-Fortran versions have dropped the "NC*()"
 interfaces.  As part of this, add conditional support for INTEGER*8
 variables.
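
For concreteness, here is a minimal sketch of that migration at a
single call site, assuming netCDF's Fortran-77 "netcdf.inc"
interfaces; the routine and variable names are hypothetical, not
taken from the I/O API sources:

```
      SUBROUTINE READSTEP( FID, VID, START, CNT, NSIZE, GRID )

C  Hypothetical call-site sketch of the netCDF-2 --> netCDF-3
C  migration; names here are illustrative, not from the I/O API.

      IMPLICIT NONE
      INCLUDE 'netcdf.inc'

      INTEGER   FID, VID          !  netCDF file and variable IDs
      INTEGER   START( 4 )        !  hyperslab starting corner
      INTEGER   CNT( 4 )          !  hyperslab edge lengths
      INTEGER   NSIZE             !  total number of values
      REAL      GRID( NSIZE )     !  output buffer

      INTEGER   IERR

C  Old netCDF-2 style, dropped by recent netCDF-Fortran releases:
C
C     CALL NCVGT( FID, VID, START, CNT, GRID, IERR )

C  New netCDF-3 "NF_*" style, with its status-code error handling:

      IERR = NF_GET_VARA_REAL( FID, VID, START, CNT, GRID )
      IF ( IERR .NE. NF_NOERR ) THEN
          WRITE( *,* ) 'NF_GET_VARA_REAL:  ', NF_STRERROR( IERR )
      END IF

      RETURN
      END
```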

Note that linking with HDF-enabled netCDF-4 is much more complicated
than linking with a "disable-NC4" build -- and already, the majority
of I/O API support requests concern link failures caused by compiler
issues ;-(
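
For a rough sense of the difference (the library names, order, and
the IOAPI_LIB placeholder here are typical assumptions, not the
I/O API Makefiles' actual settings):

```
# Classic "disable-NC4" build:  only the netCDF libraries need resolve
f90 -o myprog myprog.o -L${IOAPI_LIB} -lioapi -lnetcdff -lnetcdf

# HDF-enabled netCDF-4 build:  the whole HDF5 stack must also resolve,
# built with a compatible compiler, or the link fails
f90 -o myprog myprog.o -L${IOAPI_LIB} -lioapi -lnetcdff -lnetcdf \
    -lhdf5_hl -lhdf5 -lz -lm -ldl
```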

Note also that one of the software-requirements for the I/O API is:
fast random access to any data in very-long (even decades-long) files
(such as occur in hydrological applications of the I/O API).
HDF style compression decidedly does NOT satisfy this requirement.
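
To see why, consider time-step lookup in a classic netCDF-3 file,
where every record has a fixed size.  A minimal sketch, using the
I/O API's JSTEP3() time-step arithmetic; the surrounding function
and argument names are illustrative:

```
      INTEGER FUNCTION FINDREC( JDATE, JTIME, SDATE, STIME, TSTEP )

C  Sketch only:  FINDREC and its arguments are hypothetical;
C  JSTEP3() is the I/O API's actual time-step arithmetic routine.

      IMPLICIT NONE
      INTEGER   JDATE, JTIME      !  date&time wanted (YYYYDDD,HHMMSS)
      INTEGER   SDATE, STIME      !  file starting date&time
      INTEGER   TSTEP             !  file time step (HHMMSS)

      INTEGER   JSTEP3
      EXTERNAL  JSTEP3

C  With fixed-size records, the record number -- and from it the
C  byte offset  HDRSIZE + (FINDREC-1)*RECSIZE -- is constant-time
C  arithmetic, independent of how long the file is:

      FINDREC = JSTEP3( JDATE, JTIME, SDATE, STIME, TSTEP )

C  With HDF-style chunked/compressed storage there is no fixed
C  record size:  every access must instead walk the per-variable
C  chunk indexes and decompress, and those indexes grow with the
C  file.

      RETURN
      END
```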

Carlie J. Coats, Jr., Ph.D. cjcoats@email.unc.edu
Senior Software Engineer I/O API Author/Maintainer
Center for Environmental Modeling for Policy Development,
UNC Institute for the Environment www.ie.unc.edu
100 Europa Dr., Suite 490 Rm 4051 / Campus Box 1105 919.843.5951
Chapel Hill, NC 27599-1105 Fax 919.966.9920

Thanks for the prompt reply.

Note also that one of the software-requirements for the I/O API is:
fast random access to any data in very-long (even decades-long) files
(such as occur in hydrological applications of the I/O API).
HDF style compression decidedly does NOT satisfy this requirement.

I'm curious, why does appropriate chunking not satisfy this requirement?

http://www.unidata.ucar.edu/blogs/developer/en/entry/chunking_data_why_it_matters

Aidan Heerdegen wrote:

I'm curious, why does appropriate chunking not satisfy this requirement?

The trouble is in chasing down all the index-blocks necessary to find
a particular timestep:  with classic netCDF-3 records that lookup is
a single offset computation, but with chunked storage each access
must walk the chunk indexes, and there are more of them the longer
the file runs.

-- Carlie

Thanks for the feedback. Feel free to close this issue.