This tutorial is also accompanied by PyTorch source code, which can be found in the src
folder. Furthermore, all plots and metrics mentioned here can be found at this link.
You can also run the code with wandb. First, go to the src
directory and run the following command:
python main.py
wandb provides machine learning experiment tracking, dataset versioning, and model evaluation. To set it up:
- Create an account on wandb.ai.
- Install wandb.
pip install wandb
- Link your machine to your account. When logging in, enter your private API key from wandb.ai.
wandb login
import wandb
wandb.init(project="my-funny-project")
wandb.init()
starts a new run and begins tracking system metrics and console logs.
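As a minimal sketch (the project and run names below are just placeholders), a run can also be created and closed explicitly:

import wandb

# Start a new run; project and run names are placeholders for your own values.
run = wandb.init(project="my-funny-project", name="baseline-run")

# ... training code goes here ...

# Mark the run as finished once training is done.
run.finish()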
Different metrics, such as loss and accuracy, can easily be logged with the wandb.log()
command. For example,
wandb.log({'accuracy': train_acc, 'loss': train_loss})
By default, wandb
plots all metrics in one section. If you want to separate sections for training, validation, etc., simply prefix the metric name with the section name followed by a slash.
For example, if you have two losses, a training loss and a validation loss, you can split the sections as follows:
wandb.log({'train/loss': train_loss, 'val/loss': val_loss})
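As a rough sketch of how this looks inside a loop (train_one_epoch and evaluate are hypothetical helpers, and the model, loaders, and num_epochs are assumed to be defined elsewhere):

import wandb

wandb.init(project="my-funny-project")

for epoch in range(num_epochs):
    # train_one_epoch / evaluate are hypothetical placeholders for your own code.
    train_loss, train_acc = train_one_epoch(model, train_loader, optimizer)
    val_loss, val_acc = evaluate(model, val_loader)

    # Metrics sharing a prefix end up in the same dashboard section.
    wandb.log({'train/loss': train_loss, 'train/accuracy': train_acc,
               'val/loss': val_loss, 'val/accuracy': val_acc}, step=epoch)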
When using argparse
, you can use the command below to easily track the hyperparameters you have used.
wandb.config.update(args) # adds all of the arguments as config variables
There are also other ways to save configuration values. For example, you can save the configuration as a dictionary and pass it to wandb.init. Check more details here.
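A small sketch of the dictionary approach (the hyperparameter names below are purely illustrative):

import wandb

# Collect hyperparameters in a plain dictionary.
config = {'learning_rate': 1e-3, 'batch_size': 64, 'epochs': 10}

# Passing the dictionary to wandb.init stores the values with the run.
wandb.init(project="my-funny-project", config=config)

# The values are then accessible through wandb.config.
lr = wandb.config.learning_rate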
Add wandb.watch(model, log='all')
to track gradients and parameter weights.
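A minimal sketch of how this fits into a PyTorch script (the model below is only a toy example):

import torch.nn as nn
import wandb

wandb.init(project="my-funny-project")

# Toy model used only to illustrate the call; replace it with your own network.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

# log='all' records both gradients and parameter histograms;
# log_freq controls how often (in logged steps) they are recorded.
wandb.watch(model, log='all', log_freq=100)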
Visualisation of weights:
Visualisation of gradients:
- Create a sweep configuration file, sweep.yaml. For example, it may look like this (a sketch of a matching train.py follows these steps):
program: train.py
method: bayes
metric:
  name: validation_loss
  goal: minimize
parameters:
  learning_rate:
    min: 0.0001
    max: 0.1
  optimizer:
    values: ["adam", "sgd"]
- Initialize a sweep.
Run the following command:
wandb sweep sweep.yaml
- Launch agent(s) with the sweep ID printed by the previous command:
wandb agent your-sweep-id
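To tie the pieces together, here is a rough sketch of what the train.py referenced in the sweep configuration might look like. The training internals are hypothetical placeholders, but the script has to log a metric named validation_loss, matching the name declared in sweep.yaml:

import wandb

def train():
    # The sweep agent injects the sampled hyperparameters into wandb.config.
    wandb.init()
    lr = wandb.config.learning_rate
    optimizer_name = wandb.config.optimizer

    for epoch in range(10):
        # run_one_epoch is a hypothetical placeholder for your own training step.
        val_loss = run_one_epoch(lr, optimizer_name, epoch)

        # The metric name must match the one declared in sweep.yaml.
        wandb.log({'validation_loss': val_loss})

if __name__ == '__main__':
    train()

Agents can also be launched from Python instead of the command line, e.g. wandb.agent(sweep_id, function=train, count=10).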