- Diffusion model implemented using libtorch
- Support for training and inference
- Customizable parameters for training and model configuration
- Pure C++ implementation
- C++17
- GCC 11.4.0
- CMake 3.30
- libtorch 2.1.0
- OpenCV 4.7.0 (for image processing)
- CUDAToolkit 12.3
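
A quick way to confirm that these dependencies are usable from C++ once they are installed is a small sanity-check program such as the sketch below. It is not part of this repository; the file name `check_env.cpp` is only an illustrative placeholder.

```cpp
// check_env.cpp -- optional toolchain sanity check (illustrative, not part of the repository).
// Prints the OpenCV version and whether libtorch can see a CUDA device.
#include <iostream>
#include <opencv2/core/version.hpp>
#include <torch/torch.h>

int main() {
    std::cout << "OpenCV version: " << CV_VERSION << "\n";
    std::cout << "CUDA available: " << std::boolalpha
              << torch::cuda::is_available() << "\n";
    if (torch::cuda::is_available()) {
        std::cout << "CUDA devices:   " << torch::cuda::device_count() << "\n";
        // Allocate a small tensor on the GPU to confirm the runtime works end to end.
        torch::Tensor t = torch::ones({2, 2}, torch::device(torch::kCUDA));
        std::cout << "Tensor device:  " << t.device() << "\n";
    }
    return 0;
}
```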
- Clone the repository:

  ```bash
  git clone --recursive https://github.com/your_username/diffusion-model-cpp.git
  cd diffusion-model-cpp
  ```
- Install dependencies:

  ```bash
  sudo apt-get update
  sudo apt-get install libopencv-dev
  ```
- Download and extract libtorch:

  ```bash
  wget https://download.pytorch.org/libtorch/cu121/libtorch-cxx11-abi-shared-with-deps-2.3.1%2Bcu121.zip
  unzip libtorch-cxx11-abi-shared-with-deps-2.3.1+cu121.zip
  ```
- Build the project using CMake:

  ```bash
  mkdir build
  cd build
  cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch ..
  make
  ```
- Prepare your dataset and update the dataset path and log directory path in the configuration file configs/sample.json (see the image-loading sketch after this list).
- Run the training program:

  ```bash
  ./build/src/train configs/sample.json
  ```
- The trained model and training logs will be saved in the log directory.
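
As a companion to the dataset-preparation step above, the sketch below shows how an image is typically read with OpenCV and converted into a normalized libtorch tensor. It is a generic illustration rather than this project's actual data loader; the helper name `load_image_as_tensor`, the square resize, and the [-1, 1] scaling are assumptions.

```cpp
// Illustrative only: a typical OpenCV-to-libtorch conversion, not the project's loader.
#include <opencv2/opencv.hpp>
#include <torch/torch.h>
#include <stdexcept>
#include <string>

// Hypothetical helper: reads an image, resizes it, and returns a CHW float tensor in [-1, 1].
torch::Tensor load_image_as_tensor(const std::string& path, int image_size) {
    cv::Mat img = cv::imread(path, cv::IMREAD_COLOR);    // BGR, 8-bit
    if (img.empty()) {
        throw std::runtime_error("failed to read image: " + path);
    }
    cv::cvtColor(img, img, cv::COLOR_BGR2RGB);            // use the usual RGB convention
    cv::resize(img, img, cv::Size(image_size, image_size));

    // HWC uint8 -> CHW float tensor, rescaled to [-1, 1].
    torch::Tensor t = torch::from_blob(
        img.data, {img.rows, img.cols, 3}, torch::kUInt8).clone();  // clone: own the memory
    t = t.permute({2, 0, 1}).to(torch::kFloat32).div(255.0).mul(2.0).sub(1.0);
    return t;  // shape: {3, image_size, image_size}
}
```

Scaling pixel values to [-1, 1] is a common convention when training diffusion models on images, which is why it is shown here.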
- This project uses libtorch for implementing the diffusion model.
- OpenCV is used for image processing tasks.
- Inspiration and algorithms are based on recent research in the field of diffusion models.
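
For readers new to the underlying algorithm, the block below sketches the DDPM-style forward noising step and noise-prediction loss that diffusion training is typically built around, expressed with the libtorch C++ API. It is a generic illustration under common assumptions (a linear beta schedule and a noise-prediction objective), not this project's implementation; the names `NoiseSchedule`, `q_sample`, and `noise_prediction_loss`, as well as the `model` placeholder, are hypothetical.

```cpp
// Generic DDPM-style forward noising step in libtorch (illustration, not this project's code).
#include <torch/torch.h>

// Hypothetical schedule: linearly spaced betas, as in the original DDPM formulation.
struct NoiseSchedule {
    torch::Tensor alphas_cumprod;  // cumulative product of (1 - beta_t)

    explicit NoiseSchedule(int64_t timesteps,
                           double beta_start = 1e-4, double beta_end = 2e-2) {
        torch::Tensor betas = torch::linspace(beta_start, beta_end, timesteps);
        alphas_cumprod = torch::cumprod(1.0 - betas, /*dim=*/0);
    }

    // q(x_t | x_0): x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise
    torch::Tensor q_sample(const torch::Tensor& x0, const torch::Tensor& t,
                           const torch::Tensor& noise) const {
        torch::Tensor a_bar = alphas_cumprod.index_select(0, t)
                                  .reshape({-1, 1, 1, 1});  // broadcast over CHW
        return torch::sqrt(a_bar) * x0 + torch::sqrt(1.0 - a_bar) * noise;
    }
};

// Sketch of one training objective: the network learns to predict the added noise.
// `model` stands in for whatever UNet-like module the project defines.
template <typename Model>
torch::Tensor noise_prediction_loss(Model& model, const NoiseSchedule& sched,
                                    const torch::Tensor& x0, int64_t timesteps) {
    torch::Tensor t = torch::randint(0, timesteps, {x0.size(0)}, torch::kLong);
    torch::Tensor noise = torch::randn_like(x0);
    torch::Tensor xt = sched.q_sample(x0, t, noise);
    return torch::mse_loss(model->forward(xt, t), noise);
}
```

A training loop would repeatedly evaluate such a loss on batches of image tensors, back-propagate, and step an optimizer such as torch::optim::Adam.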