yubinml2019's Stars
CyC2018/CS-Notes
:books: Essential fundamentals for technical interviews, LeetCode, operating systems, computer networks, and system design
microsoft/SynapseML
Simple and Distributed Machine Learning
wzhe06/Ad-papers
Papers on Computational Advertising
huawei-noah/Pretrained-Language-Model
Pretrained language models and related optimization techniques developed by Huawei Noah's Ark Lab.
danielfrg/word2vec
Python interface to Google word2vec
ChineseGLUE/ChineseGLUE
Language Understanding Evaluation benchmark for Chinese: datasets, baselines, pre-trained models, corpora, and a leaderboard
PrincetonML/SIF
Sentence embedding via the Smooth Inverse Frequency (SIF) weighting scheme
DeepGraphLearning/RecommenderSystems
tensorflow/java
Java bindings for TensorFlow
TobiasLee/Text-Classification
Implementations of papers for the text classification task on DBpedia
jc-LeeHub/Recommend-System-tf2.0
Principle explanations and hands-on code: recommendation algorithms can be simple 🔥 If you want to study recommendation algorithms systematically, feel free to Star this repo or Fork it to your own account to learn from 🚀 For any questions, open an Issue or contact me via the details at the end of the README!
PacktPublishing/TensorFlow-Machine-Learning-Cookbook
Code repository for TensorFlow Machine Learning Cookbook by Packt
gy910210/rnn-from-scratch
Implementing a Recurrent Neural Network from Scratch
RandolphVI/Hierarchical-Multi-Label-Text-Classification
Code for the CIKM'19 paper "Hierarchical Multi-label Text Classification: An Attention-based Recurrent Network Approach"
qiangsiwei/bert_distill
BERT distillation (distillation experiments based on BERT)
xwzhong/papernote
Paper notes, including personal comments, introductions, code, etc.
HoyTta0/KnowledgeDistillation
Knowledge distillation for Chinese text classification in PyTorch: BERT and XLNet teacher models, biLSTM student model.
yanwii/ChineseNER
Chinese organization- and person-name recognition based on Bi-GRU + CRF; supports the Google BERT model
tacchinotacchi/distil-bilstm
Scripts to train a bidirectional LSTM with knowledge distillation from BERT
digix2020/digix2020_ctr_rank1
Rank 1 in both the preliminary and final rounds of the CTR-prediction task, machine learning track, Huawei DIGIX Algorithm Competition 2020
zhengwsh/text-classification
TensorFlow implementations of text classifiers, including FastText, TextCNN, TextRNN, TextBiRNN, TextRCNN, HAN, etc.
realcactus/bert
TensorFlow code and pre-trained models for BERT
Edy-Barraza/Transformer_Distillation
Knowledge Distillation For Transformer Language Models
dongxiaohuang/TextClassifier_Transformer
A personal text classifier built on Google's open-source BERT (via fine-tuning); it can freely load well-known pretrained NLP language models: BERT, BERT-wwm, RoBERTa, ALBERT, and ERNIE 1.0
Leon0427/DCN
Build a Deep & Cross Network in TensorFlow from scratch
yanqiuxia/BERT-PreTrain
Fine-tune BERT in the Chinese domain without the TensorFlow Estimator, using character-level masking and whole-word masking (wwm), respectively
xv44586/Knowledge-Distillation-NLP
Some demos of knowledge distillation in NLP
demonSong/xgboost4j
XGBoost for Java
lvyufeng/keras_text_sum
A Keras implementation of a seq2seq text summarization method
avinashsai/Attention-based-CNN-for-sentence-classification
Implementation of an attention-based CNN for sentence classification: https://www.isca-speech.org/archive/Interspeech_2016/pdfs/0354.PDF