The folder `beat` contains an implementation of the paper *Temporal convolutional networks for musical audio beat tracking* by Matthew E. P. Davies and Sebastian Böck.
The folder `joint_beat_tempo` follows the paper *Multi-task learning of tempo and beat: learning one to improve the other* by Sebastian Böck, Matthew E. P. Davies, and Peter Knees.
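Both models are built around a temporal convolutional network: a stack of dilated 1D convolutions whose receptive field grows exponentially with depth. The sketch below is a minimal, hypothetical PyTorch illustration of such a block; the channel count, kernel size, activation, and number of dilation levels are assumptions loosely based on the papers, not the exact architecture in this repository.

```python
import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    """One dilated-convolution block of a TCN (illustrative sketch only;
    channels, kernel size, and dropout rate are assumed values)."""

    def __init__(self, channels: int = 16, kernel_size: int = 5, dilation: int = 1):
        super().__init__()
        # 'same' padding so the temporal length of the input is preserved
        padding = (kernel_size - 1) * dilation // 2
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=padding, dilation=dilation)
        self.activation = nn.ELU()
        self.dropout = nn.Dropout(0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, channels, frames)
        return self.dropout(self.activation(self.conv(x)))

# Stacking blocks with exponentially growing dilation widens the receptive field
tcn = nn.Sequential(*[TCNBlock(dilation=2 ** i) for i in range(11)])
```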
You can clone this repository on your machine with the following command:
git clone git@github.com:camdore/Beat-and-tempo-tracking.git
For this repository to work you need several packages, which you can install with the following command:
pip install -r requirements.txt
To train the model described in the paper, run this command:
python train.py --batch-size 64 --path-track "path/to/tracks" --path-beats "path/to/beats"
The hyperparameters (learning rate, F1-score evaluation window, etc.) are already set in the code.
This command is for the beat-only version; for joint beat and tempo tracking, add the `--path-tempo` argument.
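In the joint version, the network is trained on a beat target and a tempo target at once. As a rough, hypothetical sketch of what such a multi-task objective can look like (the loss functions, tensor shapes, and equal weighting are assumptions, not necessarily what this repository or the paper uses):

```python
import torch
import torch.nn.functional as F

def joint_loss(beat_logits, beat_targets, tempo_logits, tempo_targets):
    # Per-frame beat activation treated as binary classification
    beat_loss = F.binary_cross_entropy_with_logits(beat_logits, beat_targets)
    # Tempo treated as classification over discrete tempo bins
    tempo_loss = F.cross_entropy(tempo_logits, tempo_targets)
    # Equal weighting is an assumption; the actual scheme may differ
    return beat_loss + tempo_loss

# Dummy tensors for one track: 3000 frames, 300 tempo classes (assumed sizes)
beat_logits = torch.randn(1, 3000)
beat_targets = torch.randint(0, 2, (1, 3000)).float()
tempo_logits = torch.randn(1, 300)
tempo_targets = torch.randint(0, 300, (1,))
print(joint_loss(beat_logits, beat_targets, tempo_logits, tempo_targets))
```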
To evaluate the model on a dataset, run:
python eval.py --path-track "path/to/tracks" --path-beats "path/to/beats" --checkpoint-path "path/to/the/checkpoints/TCN_beat_only.ckpt"
There is no batch-size argument because the post-processing step only accepts a batch size of 1.
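The papers post-process the network's per-frame beat activation with a dynamic Bayesian network that decodes one track at a time, which is why evaluation cannot be batched. A minimal sketch with madmom's DBN beat tracker (the 100 fps frame rate is an assumption, and this repository's post-processing may differ):

```python
import numpy as np
from madmom.features.beats import DBNBeatTrackingProcessor

# Stand-in for the per-frame beat probabilities of a single track
activations = np.random.rand(3000).astype(np.float32)

# The DBN decodes beat times from one track's activation function,
# hence the fixed batch size of 1 during evaluation
proc = DBNBeatTrackingProcessor(fps=100)
beat_times = proc(activations)  # beat positions in seconds
print(beat_times[:5])
```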
This command is for the beat-only version; for joint beat and tempo tracking, add the `--path-tempo` argument and change the `--checkpoint-path` parameter.
The pretrained model weights are in the `checkpoints` folder of this repository.
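The `.ckpt` extension suggests PyTorch Lightning checkpoints. A hypothetical sketch of loading one for inference (the `beat.model` import path, the `TCNModel` class name, and the input shape are assumptions; check the repository's code for the real names):

```python
import torch
from beat.model import TCNModel  # hypothetical import path

# Load the pretrained beat-only checkpoint (Lightning-style .ckpt assumed)
model = TCNModel.load_from_checkpoint("checkpoints/TCN_beat_only.ckpt")
model.eval()

# Dummy input standing in for one track's spectrogram features;
# the (batch, frames, bins) shape is an assumption
features = torch.randn(1, 3000, 81)

with torch.no_grad():
    activations = model(features)
```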
Our model achieves the following performance:
| Model name      | F1-score |
| --------------- | -------- |
| TCNModel (ours) | 0.823    |
| PaperModel      | 0.843    |
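For reference, beat-tracking F1 counts an estimated beat as correct when it falls within a tolerance window of a reference beat (±70 ms is the common default in this literature, matching the F1-score window hyperparameter mentioned above). A small sketch using `mir_eval` (assuming this repository uses the same metric definition):

```python
import numpy as np
import mir_eval

# Hypothetical beat times in seconds for one track
reference_beats = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
estimated_beats = np.array([0.51, 1.02, 1.48, 2.10, 2.49])

# An estimate counts as a hit if it lies within ±70 ms of a reference beat
f1 = mir_eval.beat.f_measure(reference_beats, estimated_beats,
                             f_measure_threshold=0.07)
print(f"F1-score: {f1:.3f}")
```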