Resources about AI, machine learning, deep learning, Python, etc.
Why do we normalize images by subtracting the dataset's mean image?
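The mean-subtraction question above comes down to centering every pixel feature at zero; a minimal numpy sketch (array shapes and names are illustrative, not from any specific library):

```python
import numpy as np

# Toy dataset: 100 RGB images of size 32x32, values in [0, 255].
images = np.random.rand(100, 32, 32, 3) * 255

# Per-pixel mean image computed over the whole training set.
mean_image = images.mean(axis=0)

# Centering: each pixel feature now has (approximately) zero mean,
# which keeps the inputs on a comparable scale and tends to make
# gradient-based training better conditioned.
centered = images - mean_image
```

At test time the same training-set `mean_image` is subtracted from new images, so train and test inputs go through an identical transform.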
Data Cleaning Challenge: Handling missing values, by Rachael Tatman
jieba 結巴: Python package for Chinese word segmentation
Academia Sinica word segmentation system (registration required)
LIDC: Chinese part-of-speech and sentiment analysis
E-HowNet: Chinese part-of-speech tagging, structured analysis, and English mappings
Sinica NLPLab CSentiPackage: Java document sentiment analysis
Data augmentation methods in deep learning, with code implementations
The Effectiveness of Data Augmentation in Image Classification using Deep Learning
- N-Gram
- Topic Model
- Convolutional Neural Networks for Sentence Classification
- Best Practices for Document Classification with Deep Learning
- Time Series Classification from Scratch with Deep Neural Networks: A Strong Baseline
- What does 1x1 convolution mean?
- What are the uses and benefits of 1×1 convolutions in convolutional neural networks?
- Transposed Convolution, Fractionally Strided Convolution or Deconvolution
- A guide to convolution arithmetic for deep learning: convolution, pooling, stride, transpose convolution
- CS231n: Convolutional Neural Networks for Visual Recognition: Stanford course
- The Unreasonable Effectiveness of Recurrent Neural Networks by Andrej Karpathy
- Understanding LSTM Networks
- LSTM recurrent neural networks (LSTM RNN) by 莫煩
- Comparison of DNNs, CNNs, and RNNs
- Udacity RNN quick introduction
- LSTM Networks - The Math of Intelligence : handcrafted in numpy by Siraj Raval
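The 1×1-convolution links above boil down to one idea: a 1×1 convolution is a per-pixel linear map across channels. A minimal numpy sketch (shapes and names are illustrative):

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution: x is (H, W, C_in), w is (C_in, C_out).

    Every spatial position is mapped independently by the same
    C_in -> C_out linear transform, so a 1x1 conv is just a matrix
    multiply over the channel axis. It is commonly used to shrink
    the channel dimension cheaply before expensive 3x3/5x5 convs.
    """
    return x @ w  # matmul broadcasts over the H and W axes

x = np.random.rand(8, 8, 256)   # feature map with 256 channels
w = np.random.rand(256, 64)     # 1x1 kernel reducing to 64 channels
y = conv1x1(x, w)
print(y.shape)  # (8, 8, 64)
```

This channel-reduction trick is the "bottleneck" pattern used in Inception and ResNet blocks.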
Ian Goodfellow's GAN recommendation list:
- Progressive GANs (probably the highest-quality images so far)
- Spectral normalization (got GANs working on lots of classes, which has been hard)
- Projection discriminator (from the same lab as spectral normalization; the two techniques work well together and give very good results with 1000 classes). Video of the two methods combined: https://www.youtube.com/watch?time_continue=3&v=r6zZPn-6dPY
- Are GANs created equal? A big empirical study showing the importance of rigorous empirical work, and that many GAN variants don't actually offer improvements in practice.
- WGAN-GP: probably the most popular GAN variant today, and seems pretty good in my opinion. Caveat: the baseline GAN variants should not perform nearly as badly as this paper claims, especially on the text task.
- StackGAN++: high-quality text-to-image synthesis with GANs
- The "GANs with encoders" space: worth being a little aware of; one of my favorites.
- The "theory of GAN convergence" space: worth being a little aware of; one of my favorites.
- Deep RL Bootcamp
- CS 294: Deep Reinforcement Learning, Fall 2017
- DQN from Beginner to Giving Up, part 6: various improvements to DQN
- David Silver's RL course
- Applied Deep Learning / Machine Learning and Having It Deep and Structured
- RL tutorials by 莫煩
- Actor-Critic Algorithms
- A3C by 李弘毅
- Denny Britz RL
- DDPG
- Awesome Reinforcement Learning by aikorea
- How to write a reward function by bonsai
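In the spirit of the reward-function article linked above, here is a toy sketch of reward shaping for a pole-balancing task. The environment, thresholds, and weights are all made up for illustration; the point is dense feedback, bounded magnitude, and a penalty on terminal failure:

```python
import math

def reward(pole_angle, cart_position, done):
    """Hypothetical shaped reward for a cart-pole-style task.

    pole_angle: radians from vertical; cart_position: meters from
    track center; done: True if the pole fell and the episode ended.
    All constants below are illustrative, not from any real environment.
    """
    if done:                       # episode ended by falling over
        return -10.0               # large but bounded failure penalty
    upright = math.cos(pole_angle)                       # 1.0 when perfectly upright
    centered = 1.0 - min(abs(cart_position) / 2.4, 1.0)  # 1.0 at track center
    return 0.8 * upright + 0.2 * centered                # dense signal, at most 1.0

print(reward(0.0, 0.0, False))  # 1.0 for the ideal state
```

A dense reward like this gives the agent a gradient toward the goal on every step, instead of a sparse 0/1 signal only at episode end.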
- Awesome CV
- A Neural Algorithm of Artistic Style
- Tool for label data
- SSD: Single Shot MultiBox Detector: one stage detector
- Selective Search for Object Recognition
- CS231n Lecture 8 - Localization and Detection
- Detailed explanation of the R-CNN algorithm
- R-CNN: the pioneering work that brought CNNs into object detection
- How is an ROI in the original image mapped onto the feature map?
- Detailed explanation of the Fast R-CNN algorithm
- Fast R-CNN Author Slides
- Kaiming He & RGB: ResNet, R-CNN at CVPR 2017
- Object detection: R-CNN, SPP-Net, Fast R-CNN, Faster R-CNN
- Keras on Faster R-CNN
- How does the region proposal network (RPN) in Faster R-CNN work?
- AP: Average Precision
- Light head R-CNN
- Fully Convolutional Networks for Semantic Segmentation paper
- U-Net: Convolutional Networks for Biomedical Image Segmentation paper
- R-CNN
- Fast R-CNN
- Faster R-CNN
- Mask R-CNN
- Yolo Paper
- Yolo 9000 Paper
- Project Yolo
- Notes on the YOLO9000: Better, Faster, Stronger paper
- YOLO2 - YAD2K
- Pascal VOC: 20 classes
- Microsoft COCO: 80 classes
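The average-precision entry above is built on intersection-over-union (IoU) matching between predicted and ground-truth boxes; a minimal sketch, assuming the common (x1, y1, x2, y2) corner format:

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])   # intersection bottom-right
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 2x2 boxes overlapping by half along x: intersection 2, union 6.
print(iou((0, 0, 2, 2), (1, 0, 3, 2)))  # 2 / 6 = 0.333...
```

Detection benchmarks such as Pascal VOC count a prediction as correct when its IoU with a ground-truth box exceeds a threshold (classically 0.5), then compute AP from the resulting precision-recall curve.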