NVIDIA/DALI

Roadmap 2023

JanuszL opened this issue · 15 comments

The following represents a high-level overview of our 2023 plan. Please be aware that this roadmap may change at any time, and the order below does not reflect any kind of priority.

We strongly encourage you to comment on this issue and give us your feedback on the roadmap.

Some of the items mentioned below are a continuation of the 2022 effort (#3774).

Improving Usability:

  • eager mode - extending support for using DALI operators as standalone entities and improving their interoperability with other libraries, such as VPF, CV-CUDA, or MONAI
  • conditional execution - providing a convenient API to conditionally apply operations based on a predicate, enabling AutoAugment-style capabilities (see the sketch after this list)
  • support for the NVIDIA Grace Hopper Superchip - this includes a flexible execution model utilizing fast CPU<->GPU memory transfers, where data can go from the CPU to the GPU and back to the CPU in a single pipeline
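A minimal sketch of the conditional execution API as available in the experimental builds: setting enable_conditionals=True in pipeline_def lets plain Python if statements apply operations per sample. The file_root path and the choice of flip as the augmentation are hypothetical:

```python
from nvidia.dali import pipeline_def, fn, types

@pipeline_def(enable_conditionals=True, batch_size=8, num_threads=2, device_id=0)
def conditional_pipeline():
    # Hypothetical image source; any reader works here.
    jpegs, labels = fn.readers.file(file_root="images")
    images = fn.decoders.image(jpegs, device="mixed")
    # A per-sample boolean predicate drives the conditional below.
    do_flip = fn.random.coin_flip(probability=0.5, dtype=types.BOOL)
    if do_flip:
        images = fn.flip(images, horizontal=1)
    return images, labels
```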

Extending input format support:

  • Extending the supported video formats and containers, including variable frame rate videos
    • decoding raw H.264 and H.265 streams from memory (#4480)
  • Support for higher dynamic range data (int32, float) throughout the whole data processing pipeline
  • Adding GPU acceleration for more image formats, like TIFF, or new profiles of the existing ones

Performance:

  • optimizing memory consumption
  • operator performance optimizations
    • O_DIRECT mode support in fn.readers.tfrecord (#4820)
    • O_DIRECT mode support in fn.readers.numpy (#4796, #4848) - see the sketch below
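A minimal sketch of enabling that mode, assuming the use_o_direct flag added in the PRs above and a hypothetical directory of .npy files:

```python
from nvidia.dali import pipeline_def, fn

@pipeline_def(batch_size=4, num_threads=2, device_id=0)
def numpy_reader_pipeline():
    # O_DIRECT bypasses the OS page cache, which can help when reading
    # large arrays sequentially from fast storage.
    data = fn.readers.numpy(
        device="cpu",
        file_root="numpy_data",  # hypothetical directory of .npy files
        use_o_direct=True,
    )
    return data
```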

New transformations:

We are constantly extending the set of operations supported by DALI. This section lists the most notable additions planned this year in our areas of interest. The list is not exhaustive, and we plan to expand the set of operators as needs or requests arise.

  • new transformations for general data processing
    • fn.experimental.tensor_resize operator (#4492) - see the sketch after this list
  • new transformations for image processing
  • new transformations for video processing
    • the above image transformations are applicable to video as well
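A minimal sketch of the fn.experimental.tensor_resize operator mentioned above; the sizes argument and the random 3D input here are assumptions for illustration:

```python
import numpy as np
from nvidia.dali import pipeline_def, fn, types

@pipeline_def(batch_size=2, num_threads=2, device_id=0)
def tensor_resize_pipeline():
    # Feed arbitrary-dimensional tensors, not just images.
    data = fn.external_source(
        source=lambda: [np.random.rand(16, 16, 16).astype(np.float32)] * 2,
        batch=True, dtype=types.FLOAT)
    # Resize every dimension of the tensor to the requested output sizes.
    return fn.experimental.tensor_resize(data, sizes=[8, 8, 8])
```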

Hi, guys!
Is there a planned release date for stable support of conditional execution?

Hi @songyuc,

I think it is rather a matter of sufficient testing than feature completeness (in DALI 1.23 and the current nightly builds it is available in the experimental module). What we are focusing on now is checking the quality and performance in different use cases. I hope we can call it stable (not experimental anymore) a couple of releases from now.

Thanks for your response!
I will try it in DALI 1.23.

```python
import nvidia.dali.fn as fn
import nvidia.dali.types as types
from nvidia.dali import pipeline_def

@pipeline_def(batch_size=5, num_threads=2, device_id=0, py_num_workers=4, py_start_method='spawn')
def my_pipeline(shard_id, num_shards, batch_size):
    # create_callback is my per-sample callback (defined elsewhere).
    jpegs, labels = fn.external_source(
        source=create_callback(batch_size, shard_id, num_shards),
        num_outputs=2, batch=False, parallel=True, dtype=[types.UINT8, types.INT32])
    decoded = fn.decoders.image(jpegs, device="mixed")
    return decoded, labels

pipe = my_pipeline(batch_size=10, shard_id=0, num_shards=2)
```

I have to provide two 'batch_size' values for external_source: one in 'my_pipeline' and one in 'pipeline_def'. Sometimes 'batch_size' is an external argument and is not equal to the value preset in 'pipeline_def'.

For example, in this case, I will get only 5 samples in each 10-sample batch.

Env: Python 3.11, PyTorch 2.0, CUDA 11.8.

@prefer-potato,

I have to provide two 'batch_size' values for external_source: one in 'my_pipeline' and one in 'pipeline_def'. Sometimes 'batch_size' is an external argument and is not equal to the value preset in 'pipeline_def'.

I understand this may be inconvenient in some cases. The idea is that the batch size provided to the pipeline is the maximum one, while the external source has the freedom to provide batches of variable length (in your case it is fixed, but it doesn't have to be). A sketch of this idea follows.
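A minimal sketch, with a hypothetical generator that yields batches of varying size, never exceeding the pipeline-level batch_size:

```python
import numpy as np
from nvidia.dali import pipeline_def, fn, types

max_batch_size = 10  # the pipeline's batch_size acts as an upper bound

def variable_batches():
    # Yield whole batches of varying length, up to max_batch_size samples.
    while True:
        n = np.random.randint(1, max_batch_size + 1)
        yield [np.full((2, 2), i, dtype=np.uint8) for i in range(n)]

@pipeline_def(batch_size=max_batch_size, num_threads=2, device_id=0)
def var_batch_pipeline():
    return fn.external_source(source=variable_batches(), batch=True,
                              dtype=types.UINT8)
```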

Thank you very much for replying. 🥰🥰

Hello, are there any plans for accelerating audio reading and compression (MP3)? Or do you by any chance know of a different team (library) that is working on that? Thanks for your work!

Hi @Etzelkut,

Thank you for reaching out.

Hello, are there any plans for accelerating audio reading and compression (MP3)? Or do you by any chance know of a different team (library) that is working on that?

We are discussing this internally. Could you tell us what your use case is? Is it about training or inference? What do you use now for decoding?

Hello! Are there any plans for adding a reader for the NIfTI file format?

The three reasons I give are:

  1. Most neuroimaging data, even data that were preprocessed by a pipeline, are in NIfTI format.
  2. NIfTI files contain metadata that are useful.
  3. Not having to save two versions of the same file.

Reason 1.

I cannot speak for all medical imaging people, but at least in neuroimaging, I believe the .nii.gz and .nii formats are mostly used as inputs/outputs of data preprocessing.

(below is a list of preprocessing pipelines that input/output NIfTI files)

  • FreeSurfer (structural MRI data preprocessing pipeline)
  • QSIPrep (diffusion MRI data preprocessing pipeline)
  • fMRIPrep (functional MRI data preprocessing pipeline)
  • ... and so on

Reason 2.

Moreover, unlike .npy files, the NIfTI file format also stores metadata specific to the image, such as dimension info, the time for each slice (for example, the length of each image in 4D fMRI), the affine matrix, the size of each voxel, and so on. It would be helpful if this metadata could be accessed via a DALI reader.

Reason 3.

Due to the extra metadata that NIfTI files contain, we cannot just delete them to make room for .npy files. This leads to us having two versions of the same files: one in .npy to be used with DALI, and another in .nii format for other uses.

Thank you for reading this, and making a powerful tool!

Hi @dyhan316,

Thank you for reaching out.
Have you checked the cuCIM library? It may provide the workflow you are looking for.
However, if you still prefer to use DALI, I would start with the external source operator and use one of the Python libraries for the initial data loading inside it, like this one. A sketch of that approach follows.
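A minimal sketch, assuming nibabel as the loading library and hypothetical file paths; nibabel does the initial CPU-side load, and DALI takes over from the external source:

```python
import numpy as np
import nibabel as nib  # one of the Python NIfTI libraries
from nvidia.dali import pipeline_def, fn, types

nifti_files = ["scan_0.nii.gz", "scan_1.nii.gz"]  # hypothetical paths

def load_nifti(sample_info):
    # sample_info is the SampleInfo object DALI passes in per-sample mode.
    path = nifti_files[sample_info.idx_in_epoch % len(nifti_files)]
    return nib.load(path).get_fdata(dtype=np.float32)  # voxel data as ndarray

@pipeline_def(batch_size=2, num_threads=2, device_id=0)
def nifti_pipeline():
    return fn.external_source(source=load_nifti, batch=False, dtype=types.FLOAT)
```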

Hi @JanuszL,

We are discussing this internally. Could you tell us what your use case is? Is it about training or inference? What do you use now for decoding?

Thanks for your reply! It is more related to training, because a lot of researchers who work on audio need to constantly load, and sometimes save, audio, and then move the data from RAM (CPU) to the GPU. This can be seen as one of the bottlenecks in training speed. Loading and then decoding .mp3 files is done on the CPU, and I was not able to find a suitable library that would do it on the GPU. It would be very helpful for audio-related research if, in the future, there were a library that would load and decode audio on the GPU (similarly to images and video) but also encode it back to MP3 (or change formats from .wav to .mp3) on the GPU. Right now, we are using torchaudio.

Thank you @JanuszL for your suggestion :)

Hello!
Will non-experimental support for Python 3.11 be added to this or next year's roadmap?
We've been seeing growing adoption of 3.11 as a platform, and having DALI officially support it would be great.

Thank you in advance for reading this, have a nice day!

Hi @filippocastelli,

Your question comes right on time. Python 3.11 support has just been added in #5196 and #5174 and will ship in the 1.33 release.

Please continue the discussion in #5320.