# NAM: neural amp modeler
This repository handles training, reamping, and exporting the weights of a model. For playing trained models in real time in a standalone application or plugin, see the partner repo, NeuralAmpModelerPlugin.
## How to use (Google Colab)
If you don't have a good computer for training ML models, you can use Google Colab to train in the cloud using the pre-made notebooks under `bin/train`.
For the easiest experience, open `easy_colab.ipynb` on Google Colab and follow the steps!
For a little more visibility under the hood, you can use `colab.ipynb` instead.
Pros:
- No local installation required!
- Decent GPUs are available if you don't have one on your computer.
Cons:
- Uploading your data can take a long time.
- The session will time out after a few hours (for free accounts), so extended training runs aren't really feasible. Also, there's a usage limit so you can't hang out all day. I've tried to set you up with a good model that should train reasonably quickly!
## How to use (Local)
Alternatively, you can clone this repo to your computer and use it locally.
### Installation
Installation uses Anaconda for package management.
For computers with a CUDA-capable GPU (recommended):

```bash
conda env create -f environment_gpu.yml
```
Otherwise, for a CPU-only install (will train much more slowly):

```bash
conda env create -f environment_cpu.yml
```
Then activate the environment you've created with

```bash
conda activate nam
```
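To verify the environment (and, for the GPU install, that PyTorch can see your GPU), a quick sanity check like the following can help. This is just a sketch; `torch` is what the environment files install.

```python
# Sanity check: run inside the activated "nam" environment.
import torch

print(f"PyTorch version: {torch.__version__}")
# Expect True on a working GPU install; False is normal for the CPU-only one.
print(f"CUDA available: {torch.cuda.is_available()}")
```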
### Train models (GUI)
After installing, you can open a GUI trainer by running

```bash
nam
```

from the terminal.
### Train models (Python script)
For users looking for more fine-grained control over the modeling process, NAM includes a training script that can be run from the terminal. To run it, follow these steps:
#### Download audio files
Download `v1_1_1.wav` and `overdrive.wav` to a folder of your choice.
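Before training, it can be worth a quick check that both files load and share a sample rate; a minimal sketch (not part of NAM's tooling, and assuming the files are in the current directory):

```python
# Print the sample rate and length of each capture.
from scipy.io import wavfile

for path in ("v1_1_1.wav", "overdrive.wav"):
    rate, data = wavfile.read(path)
    print(f"{path}: {rate} Hz, {len(data)} samples")
```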
#### Update data configuration
Edit `bin/train/inputs/data/single_pair.json` to point to the relevant audio files:
"common": {
"x_path": "C:\\path\\to\\v1_1_1.wav",
"y_path": "C:\\path\\to\\overdrive.wav",
"delay": 0
}
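Here, `x_path` points at the input (DI) recording and `y_path` at the amp's output; `delay` compensates for the latency, in samples, between the two. If you're unsure what delay your recording setup introduced, one rough way to estimate it is by cross-correlating the two files. This sketch uses NumPy/SciPy and is not the repo's own calibration logic; it assumes both files share a sample rate:

```python
# Rough delay estimate (in samples) between input and output recordings via
# cross-correlation. Not NAM's own logic; a sketch for sanity-checking.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

def estimate_delay(x_path: str, y_path: str, seconds: float = 10.0) -> int:
    rate_x, x = wavfile.read(x_path)
    rate_y, y = wavfile.read(y_path)
    assert rate_x == rate_y, "sample rates must match"
    if x.ndim > 1:  # keep only the first channel of multi-channel files
        x = x[:, 0]
    if y.ndim > 1:
        y = y[:, 0]
    n = int(seconds * rate_x)  # correlate only a prefix to keep this fast
    x = x[:n].astype(np.float64)
    y = y[:n].astype(np.float64)
    lags = np.arange(-len(x) + 1, len(y))
    return int(lags[np.argmax(correlate(y, x, mode="full"))])

print(estimate_delay("v1_1_1.wav", "overdrive.wav"))
```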
#### Run training script
Open up a terminal. Activate your `nam` environment and call the training with:
```bash
python bin/train/main.py \
bin/train/inputs/data/single_pair.json \
bin/train/inputs/models/demonet.json \
bin/train/inputs/learning/demo.json \
bin/train/outputs/MyAmp
```
- `data/single_pair.json` contains the information about the data you're training on.
- `models/demonet.json` contains information about the model architecture being trained. The example here uses a feather-configured WaveNet.
- `learning/demo.json` contains information about the training run itself (e.g. the number of epochs).
The configuration above runs a short (demo) training. For a real training, you may prefer to run something like:
```bash
python bin/train/main.py \
bin/train/inputs/data/single_pair.json \
bin/train/inputs/models/wavenet.json \
bin/train/inputs/learning/default.json \
bin/train/outputs/MyAmp
```
As a side note, NAM uses PyTorch Lightning under the hood as a modeling framework, and you can control many of the PyTorch Lightning configuration options from `bin/train/inputs/learning/default.json`.
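For reference, those options ultimately feed PyTorch Lightning's `Trainer`. As an illustration of the kind of knobs that means (this shows the Lightning API, not the JSON file's exact schema; check `default.json` for the real keys):

```python
# Illustration only: options PyTorch Lightning's Trainer exposes.
# The JSON schema that feeds them is defined by this repo.
import pytorch_lightning as pl

trainer = pl.Trainer(
    max_epochs=100,      # how long to train
    accelerator="gpu",   # or "cpu"
    devices=1,
)
# The training script builds something like this and calls trainer.fit(...).
```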
### Export a model (to use with the plugin)
Exporting the trained model to a `.nam` file for use with the plugin can be done with:
```bash
python bin/export.py \
path/to/config_model.json \
path/to/checkpoints/epoch=123_val_loss=0.000010.ckpt \
path/to/exported_models/MyAmp
```
Then, point the plugin at the exported `model.nam` file and you're good to go!
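If you're curious about what was exported: at the time of writing, a `.nam` file is a JSON document describing the model. A quick, version-dependent way to peek inside:

```python
# Peek inside an exported model. The exact keys may change between NAM versions.
import json

with open("path/to/exported_models/MyAmp/model.nam") as fp:
    model = json.load(fp)

print(list(model.keys()))
```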
## Other utilities
### Run a model on an input signal ("reamping")
Handy if you want to just check it out without needing to use the plugin:
```bash
python bin/run.py \
path/to/source.wav \
path/to/config_model.json \
path/to/checkpoints/epoch=123_val_loss=0.000010.ckpt \
path/to/output.wav
```
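If you reamp the same input you trained on, you can compare the model's output to the real rig's capture numerically. A common metric for amp models is the error-to-signal ratio (ESR); here's a minimal sketch (the paths are placeholders, and this is not necessarily the loss NAM reports during training):

```python
# Error-to-signal ratio between the real capture and the reamped output;
# lower is better. Assumes mono WAVs at the same sample rate.
import numpy as np
from scipy.io import wavfile

_, target = wavfile.read("path/to/real_output.wav")
_, pred = wavfile.read("path/to/output.wav")
n = min(len(target), len(pred))
target = target[:n].astype(np.float64)
pred = pred[:n].astype(np.float64)

esr = np.sum((target - pred) ** 2) / np.sum(target ** 2)
print(f"ESR: {esr:.6f}")
```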