Neptune is an experiment tracker purpose-built for foundation model training.
With Neptune, you can monitor thousands of per-layer metrics—losses, gradients, and activations—at any scale. Visualize them with no lag and no missed spikes. Drill down into logs and debug training issues fast. Keep your model training stable while reducing wasted GPU cycles.
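Before diving into the examples, here is what logging with Neptune boils down to: create a run, then assign values or append series under namespaces of your choice. A minimal sketch, assuming the `neptune` client package (`pip install neptune`) and credentials in the environment; the parameter values and metric names are illustrative, not prescribed:

```python
# Minimal Neptune logging sketch. The import is guarded so the snippet
# stays runnable even where the client isn't installed.
try:
    import neptune
except ImportError:
    neptune = None


def training_losses(steps):
    """Stand-in for the per-step losses a real training loop would produce."""
    return [1.0 / (step + 1) for step in range(steps)]


if neptune is not None:
    # init_run reads NEPTUNE_API_TOKEN and NEPTUNE_PROJECT from the environment.
    run = neptune.init_run(tags=["quickstart"])

    # Log hyperparameters once, as a dict under a namespace...
    run["parameters"] = {"lr": 1e-3, "batch_size": 64}

    # ...and append per-step metrics as a series.
    for loss in training_losses(100):
        run["train/loss"].append(loss)

    run.stop()
```

The same `run["namespace"]` pattern covers everything in the examples below, from scalars and files to dataset versions.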
## 📚 Examples

In this repo, you'll find examples of using Neptune to log and retrieve your ML metadata.

You can run every example with zero setup (no registration needed).
## 🎓 How-to guides

### 👶 First steps

Each example comes with docs, an example run in Neptune, source on GitHub, and a Colab notebook:

- Quickstart
- Track and organize runs
- Monitor runs live
### 🧑 Deeper dive

- Version datasets in runs
- Programmatically manage projects
- Compare datasets between runs
- Resume run or other object
- Use Neptune in HPO jobs
- Use Neptune in pipelines
- Reproduce Neptune runs
- Restart runs from checkpoint
- Use Neptune in distributed computing
- Track models end-to-end
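Several of the examples above (resuming a run, restarting from a checkpoint, logging from pipelines) share one mechanism: reconnecting to an existing run by its ID instead of creating a new one. A minimal sketch, assuming the `neptune` client package; the run ID below is a placeholder, not a real run:

```python
# Resuming an existing Neptune run. The import is guarded so the snippet
# stays runnable where the client isn't installed.
try:
    import neptune
except ImportError:
    neptune = None

RUN_ID = "PROJ-123"  # placeholder: the sys/id of the run you want to resume

if neptune is not None:
    # with_id reconnects to the existing run instead of creating a new one.
    run = neptune.init_run(with_id=RUN_ID)

    # Appending to an existing series continues it where it left off.
    run["train/loss"].append(0.42)
    run.stop()
```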
### 👨 Advanced concepts

- Re-run failed training
- Log from sequential pipelines
- DDP training experiments
- Use multiple integrations together
### 👑 Use cases

- Text classification using fastText
- Text classification using Keras
- Text summarization
- Time series forecasting
## 🧩 Integrations and supported tools

- Airflow
- Altair
- Amazon SageMaker (custom Docker containers)
- Amazon SageMaker (PyTorch Estimator)
- Azure ML
- Bokeh
- Catalyst
- CatBoost
- DALEX
- Detectron2
- Docker
- Evidently
- fastai
- Folium (Leaflet)
- GitHub Actions
- Google Colab
- Great Expectations
- HTML
- Kedro
- Keras
- LightGBM
- Matplotlib
- MLflow
- MosaicML Composer
- Optuna
- pandas
- Plotly
- Prophet
- Python
- PyTorch
- PyTorch Ignite
- PyTorch Lightning
- R
- Sacred
- scikit-learn
- Seaborn
- skorch
- TensorBoard
- TensorFlow
- 🤗 Transformers
- XGBoost
- ZenML
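Most framework integrations follow the same pattern: a Neptune callback or logger object that you hand to the framework, which then streams metrics to a run automatically. A hedged sketch using the Keras integration, assuming `neptune`, `neptune-tensorflow-keras`, and `tensorflow` are installed; the toy model and random data exist only to show where the callback plugs in:

```python
# Keras + Neptune integration sketch, guarded so it stays runnable
# where the optional dependencies aren't installed.
try:
    import numpy as np
    import neptune
    from neptune.integrations.tensorflow_keras import NeptuneCallback
    from tensorflow import keras
except ImportError:
    NeptuneCallback = None

if NeptuneCallback is not None:
    run = neptune.init_run(tags=["keras-integration"])

    # A toy regression model on random data.
    model = keras.Sequential([keras.layers.Input(shape=(4,)), keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")

    x = np.random.rand(32, 4)
    y = np.random.rand(32, 1)

    # The callback logs per-epoch and per-batch metrics to the run.
    model.fit(x, y, epochs=2, callbacks=[NeptuneCallback(run=run)])
    run.stop()
```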
## 🛠️ Other utilities

### 🧳 Migration tools

- Import runs from Weights & Biases
- Copy runs from one Neptune project to another
- Copy models and model versions from the model registry to runs