This repository is a TensorRT deployment of the Onsets and Frames music transcription model, originally implemented in PyTorch (https://github.com/jongwook/onsets-and-frames).
- C++ dependencies
- TensorRT-8.0.1.6
- OpenCV-4.2.0
- CUDA Toolkit-10.2
- cuDNN-8.2.2.26
- Eigen-3.3.7
- libsamplerate
- protobuf-3.11.4
- cnpy
- midifile
- Python dependencies
- tensorrt-8.0.1.6 (the Python bindings shipped with the C++ TensorRT package)
- torch-1.7.1
- opencv-python-4.2
- onnx-1.8.0
- onnxruntime-1.8.0
- cudatoolkit-10.2
- torchvision-0.8.2
- torchaudio-0.7.2
- torch2trt (github)
- onnx-simplifier (github)
- librosa-0.8.1
- tensorboard-2.7.0
- mido
- mir_eval
- tqdm
Detailed version information can be found in torch1.7.yaml.
See the README file under python/onsets-and-frames/.
Model conversion:
1. Train the model yourself, or download the pretrained model from Baidu Netdisk (https://pan.baidu.com/s/1SiW8A6DHa9du9RQyzouZSQ?pwd=8ir9, extraction code: 8ir9), and put the model file under models/.
2. Convert the .pt file to ONNX:
cd python
python tools/convert_pt2onnx.py ../models/model-500000.pt ../models/model.onnx
3. Convert the ONNX model to a TensorRT engine:
python tools/convert_onnx2trt.py ../models/model.onnx ../models/model.trt
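TensorRT engines are typically built for a fixed input shape, so it helps to know the dimensions of the mel-spectrogram tensor the model consumes. A minimal sketch, assuming the defaults from jongwook's onsets-and-frames repo (SAMPLE_RATE=16000, HOP_LENGTH=512, N_MELS=229) and a centered, librosa-style STFT; the actual shape handling lives in the conversion scripts and may differ:

```python
# Sketch: estimate the ONNX/TensorRT input shape for a given audio length.
# Assumptions (onsets-and-frames defaults, not read from the conversion
# scripts): SAMPLE_RATE=16000, HOP_LENGTH=512, N_MELS=229, centered STFT.
SAMPLE_RATE = 16000
HOP_LENGTH = 512
N_MELS = 229

def mel_input_shape(n_samples: int) -> tuple:
    """Return (batch, n_frames, n_mels) for a mono clip of n_samples."""
    # Centered STFT yields one frame per hop, plus one for the final window.
    n_frames = n_samples // HOP_LENGTH + 1
    return (1, n_frames, N_MELS)

# A 10-second clip at 16 kHz:
print(mel_input_shape(10 * SAMPLE_RATE))  # (1, 313, 229)
```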
Build and run the C++ demo:
1. Configure cpp/CMakeLists.txt.
2. Build:
cd cpp && mkdir build && cd build
cmake ..
make
3. Run inference:
./amt ../../models/model.trt ../../sample/MAPS_MUS-chpn-p19_ENSTDkCl.wav