Table of Contents / 目录:
- PyTorch tutorials, examples and books
- PyTorch version changes and migration guide
- PyTorch for Numpy users 给Numpy用户的PyTorch指南
- PyTorch 1.0 tutorials and examples
- Books and slides about PyTorch 书籍、PPT等
- Deep Learning with PyTorch: A 60 Minute Blitz PyTorch深度学习:60分钟入门与实战
- Learning PyTorch with Examples 用例子学习PyTorch
- Computer Vision and PyTorch 计算机视觉与PyTorch
- PyTorch1.0-Zero-To-All
- Udacity: Deep Learning with PyTorch
- Deep Learning Course Slides and Handout - fleuret.org
- How to run? 推荐的运行方式
- PyTorch 1.0 stable has been released, so there is little reason not to upgrade.
- See here for the version changes and migration guide.
- The table is too long; please click here.
- PyTorch-basics PyTorch基础
- Linear-regression 线性回归
- Logistic-regression Logistic 回归
- optimizer 优化器
- neural-network 神经网络
- convolutional-neural-network(CNN) 卷积神经网络
- famous-CNN 经典的CNN网络
- Using Pretrained models 使用预训练的模型
- Dataset-and-Dataloader 自定义数据读取
- custom-dataset-example 定义自己的数据集 (see the sketch after this list)
- visdom-visualization visdom可视化
- tensorboard-visualization tensorboard可视化
- semantic-segmentation 语义分割
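
The Dataset-and-Dataloader and custom-dataset-example entries above cover PyTorch's data loading API. Below is a minimal sketch of a custom `Dataset` wrapped in a `DataLoader`; the class name, sizes, and random data are illustrative, not taken from the tutorials themselves.

```python
# Minimal sketch of a custom Dataset plus DataLoader (illustrative data only).
import torch
from torch.utils.data import Dataset, DataLoader

class RandomVectorDataset(Dataset):
    """Hypothetical dataset: random feature vectors with binary labels."""
    def __init__(self, n_samples=100, n_features=8):
        self.x = torch.randn(n_samples, n_features)
        self.y = torch.randint(0, 2, (n_samples,))

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        # Return one (feature, label) pair; the DataLoader handles batching.
        return self.x[idx], self.y[idx]

if __name__ == "__main__":
    loader = DataLoader(RandomVectorDataset(), batch_size=16, shuffle=True)
    for features, labels in loader:
        print(features.shape, labels.shape)  # torch.Size([16, 8]) torch.Size([16])
        break
```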
Note: some of these are for older versions; the books below mostly predate PyTorch 1.0, since 1.0 has only just been released and few books cover it yet.
- Deep Learning Toolkits II pytorch example
- PyTorch 1.0 Bringing research and production together Presentation
- Deep Learning with PyTorch - Packt - Vishnu Subramanian
- PyTorch深度学习实战 - 侯宜军(pdf)
- PyTorch深度学习实战 - 侯宜军(epub)
- 深度学习之Pytorch - 廖星宇
- 深度学习之PyTorch实战计算机视觉 - 唐进民
- PyTorch convolution and transposed convolution (pytorch卷积、反卷积) - downloaded from the internet
- A brief summary of the PTDC ’18 PyTorch 1.0 Preview and Promise - Hacker Noon
- PyTorch_tutorial_0.0.4_余霆嵩
- 深度学习入门之PyTorch - 廖星宇 (with table of contents)
- 深度学习框架PyTorch:入门与实践 - 陈云
- PyTorch 0.4 Chinese documentation (translation)
- pytorch 0.4 - tutorial - version with table of contents
- Automatic differentiation in PyTorch - paper
- What is PyTorch?
- Autograd: Automatic Differentiation
- Neural Networks
- Training a Classifier
- Optional: Data Parallelism
- Tensors
- Autograd (see the sketch after this list)
- nn module
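
The Tensors, Autograd, and nn module entries above are the core of Learning PyTorch with Examples. Here is a minimal sketch, assuming nothing beyond torch itself, of how requires_grad and backward() fit together; the values are illustrative.

```python
# Minimal sketch of tensors and autograd.
import torch

# requires_grad=True tells autograd to record operations on this tensor.
x = torch.ones(2, 2, requires_grad=True)
y = (x * 3).pow(2).sum()   # a scalar built from x: sum of (3x)^2

y.backward()               # compute dy/dx by backpropagation
print(x.grad)              # each entry is 18 * x = 18 at x = 1
```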
- A brief summary of PyTorch and computer vision (PyTorch与计算机视觉简要总结)
- Slides-newest from Google Drive
- Lecture 01_ Overview.pptx
- Lecture 02_ Linear Model.pptx
- Lecture 03_ Gradient Descent.pptx
- Lecture 04_ Back-propagation and PyTorch autograd.pptx
- Lecture 05_ Linear regression in PyTorch way.pptx
- Lecture 06_ Logistic Regression.pptx
- Lecture 07_ Wide _ Deep.pptx
- Lecture 08_ DataLoader.pptx
- Lecture 09_ Softmax Classifier.pptx
- Lecture 10_ Basic CNN.pptx
- Lecture 11_ Advanced CNN.pptx
- Lecture 12_ RNN.pptx
- Lecture 13_ RNN II.pptx
- Lecture 14_ Seq2Seq.pptx
- Lecture 15_ NSML, Smartest ML Platform.pptx
- Part 1: Introduction to PyTorch and using tensors
- Part 2: Building fully-connected neural networks with PyTorch
- Part 3: How to train a fully-connected network with backpropagation on MNIST (see the sketch after this list)
- Part 4: Exercise - train a neural network on Fashion-MNIST
- Part 5: Using a trained network for making predictions and validating networks
- Part 6: How to save and load trained models
- Part 7: Load image data with torchvision, including data augmentation
- Part 8: Use transfer learning to train a state-of-the-art image classifier for dogs and cats
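
Parts 2 and 3 build and train a fully-connected network on MNIST. Below is a minimal sketch of such a training loop, assuming torchvision is installed; the layer sizes and hyperparameters are illustrative and not those used in the course.

```python
# Minimal sketch: train a small fully-connected network on MNIST for one epoch.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_data = datasets.MNIST("data", train=True, download=True,
                            transform=transforms.ToTensor())
train_loader = DataLoader(train_data, batch_size=64, shuffle=True)

# A small fully-connected network: 784 -> 128 -> 10.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

for images, labels in train_loader:                  # one epoch only
    optimizer.zero_grad()
    logits = model(images.view(images.size(0), -1))  # flatten 28x28 to 784
    loss = criterion(logits, labels)
    loss.backward()                                  # backpropagation
    optimizer.step()
```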
- 1-1-from-anns-to-deep-learning.pdf
- 1-2-current-success.pdf
- 1-3-what-is-happening.pdf
- 1-4-tensors-and-linear-regression.pdf
- 1-5-high-dimension-tensors.pdf
- 1-6-tensor-internals.pdf
- 2-1-loss-and-risk.pdf
- 2-2-overfitting.pdf
- 2-3-bias-variance-dilemma.pdf
- 2-4-evaluation-protocols.pdf
- 2-5-basic-embeddings.pdf
- 3-1-perceptron.pdf
- 3-2-LDA.pdf
- 3-3-features.pdf
- 3-4-MLP.pdf
- 3-5-gradient-descent.pdf
- 3-6-backprop.pdf
- 4-1-DAG-networks.pdf
- 4-2-autograd.pdf
- 4-3-modules-and-batch-processing.pdf
- 4-4-convolutions.pdf
- 4-5-pooling.pdf
- 4-6-writing-a-module.pdf
- 5-1-cross-entropy-loss.pdf
- 5-2-SGD.pdf
- 5-3-optim.pdf
- 5-4-l2-l1-penalties.pdf
- 5-5-initialization.pdf
- 5-6-architecture-and-training.pdf
- 5-7-writing-an-autograd-function.pdf
- 6-1-benefits-of-depth.pdf
- 6-2-rectifiers.pdf
- 6-3-dropout.pdf
- 6-4-batch-normalization.pdf
- 6-5-residual-networks.pdf
- 6-6-using-GPUs.pdf
- 7-1-CV-tasks.pdf
- 7-2-image-classification.pdf
- 7-3-object-detection.pdf
- 7-4-segmentation.pdf
- 7-5-dataloader-and-surgery.pdf
- 8-1-looking-at-parameters.pdf
- 8-2-looking-at-activations.pdf
- 8-3-visualizing-in-input.pdf
- 8-4-optimizing-inputs.pdf
- 9-1-transposed-convolutions.pdf
- 9-2-autoencoders.pdf
- 9-3-denoising-and-variational-autoencoders.pdf
- 9-4-NVP.pdf
- 10-1-GAN.pdf
- 10-2-Wasserstein-GAN.pdf
- 10-3-conditional-GAN.pdf
- 10-4-persistence.pdf
- 11-1-RNN-basics.pdf
- 11-2-LSTM-and-GRU.pdf
- 11-3-word-embeddings-and-translation.pdf
Some code in this repo is separated into blocks using #%%.
A block is the same as a cell in a Jupyter Notebook, so an editor/IDE that supports this functionality is recommended (a short example of the layout follows the list below), such as:
- VSCode with Microsoft Python extension
- Spyder with Anaconda
- PyCharm
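
For reference, a minimal sketch of what a file split with #%% markers looks like; the contents are illustrative. Each marker starts a block that the supported editors can run on its own, like a notebook cell.

```python
#%% Cell 1: imports (runs as one block in VSCode/Spyder/PyCharm)
import torch

#%% Cell 2: a quick check that runs like a separate notebook cell
x = torch.rand(3, 3)
print(x.sum())
```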