Implementations of several neural network-related utilities for CPUs and GPUs (CUDA).
So far, most of the utilities stem from my need to work with images of different sizes grouped into zero-padded batches.
- Masking images by size
If you group images of different sizes into zero-padded batches, you may need to mask the input/output tensors before/after some layers. This layer is very handy in those cases (see the first sketch after this list).
- Adaptive pooling
Adaptive pooling layers included in packages such as Torch or PyTorch assume that all images in the batch have the same size. My implementation takes the size of each individual image in the batch into account when applying the adaptive pooling. Layers currently included: adaptive average pooling and adaptive maximum pooling (see the second sketch after this list).
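
As a rough illustration of the masking utility, here is a minimal sketch using the PyTorch wrapper. The module name `nnutils_pytorch`, the function `mask_image_from_size`, and its exact signature are assumptions here; check the installed wrapper for the definitive API.

```python
# Minimal sketch: zero out the padded region of every image in a batch.
# NOTE: module/function names and the signature are assumptions, not guaranteed.
import torch
from nnutils_pytorch import mask_image_from_size  # assumed wrapper API

x = torch.randn(3, 1, 17, 19)                      # batch zero-padded to 17 x 19
xs = torch.tensor([[10, 12], [17, 19], [16, 19]])  # true (height, width) per image
y = mask_image_from_size(x, xs, mask_value=0)      # pixels outside each size -> 0
```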
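Similarly, a sketch of the size-aware adaptive pooling through the PyTorch wrapper; again, `adaptive_maxpool_2d` (and its `output_sizes` / `batch_sizes` parameters) is an assumed name, not a confirmed signature.

```python
# Minimal sketch: adaptive max pooling that uses each image's valid region
# instead of the full padded canvas.
# NOTE: function name and parameters are assumptions, not guaranteed.
import torch
from nnutils_pytorch import adaptive_maxpool_2d  # assumed wrapper API

x = torch.randn(4, 1, 128, 300, requires_grad=True)             # padded batch
xs = torch.tensor([[128, 300], [93, 150], [40, 70], [101, 88]])  # true sizes
y = adaptive_maxpool_2d(x, output_sizes=(1, 30), batch_sizes=xs) # fixed 1 x 30 output
```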
To build and install nnutils you will need:

- A C++14 compiler (tested with GCC 6.4.0 and 7.5.0).
- CMake 3.0.
- For GPU support: the CUDA Toolkit.
- For running the tests: Google Test.
- For the PyTorch wrapper: Python 3.6, 3.7, or 3.8, and PyTorch 1.6.0.
The installation process should be straightforward, assuming you have correctly installed the required libraries and tools listed above. To build and install the PyTorch wrapper:
```sh
git clone https://github.com/jpuigcerver/nnutils.git
cd nnutils/pytorch
python setup.py build
python setup.py install
```
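
Once the wrapper is installed, a quick import is enough to check that the extension was built and placed correctly; the package name `nnutils_pytorch` is assumed from the sketches above.

```python
# Smoke test: raises ImportError if the extension was not built/installed.
import nnutils_pytorch  # assumed package name
print(nnutils_pytorch.__file__)
```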
To build and install the C++ library with CMake:

```sh
git clone https://github.com/jpuigcerver/nnutils.git
mkdir -p nnutils/build
cd nnutils/build
cmake ..
make
make install
```
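
By default, `make install` copies the headers and the compiled library to the system-wide prefix chosen by CMake; if you prefer a different location, the standard `CMAKE_INSTALL_PREFIX` variable applies, e.g. `cmake -DCMAKE_INSTALL_PREFIX="$HOME/local" ..` before running `make`.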