leejason's Stars
google-research/bert
TensorFlow code and pre-trained models for BERT
sebastianruder/NLP-progress
Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.
UKPLab/sentence-transformers
State-of-the-Art Text Embeddings
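A minimal sketch of what sentence-transformers is used for, assuming the package is installed and the publicly released "all-MiniLM-L6-v2" checkpoint is available; the sentences are placeholder examples:

```python
# Sketch: encode two sentences and compare them with cosine similarity.
# model.encode() returns numpy arrays by default.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
sentences = ["A transformer encodes text.", "Text is encoded by a transformer."]
emb = model.encode(sentences)  # shape: (2, embedding_dim)

cos = np.dot(emb[0], emb[1]) / (np.linalg.norm(emb[0]) * np.linalg.norm(emb[1]))
print(cos)
```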
jina-ai/clip-as-service
🏄 Scalable embedding, reasoning, ranking for images and sentences with CLIP
google-research/text-to-text-transfer-transformer
Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
zihangdai/xlnet
XLNet: Generalized Autoregressive Pretraining for Language Understanding
google-deepmind/graph_nets
Build Graph Nets in TensorFlow
naganandy/graph-based-deep-learning-literature
Links to conference publications in graph-based deep learning
williamleif/GraphSAGE
Representation learning on large graphs using stochastic graph convolutions.
minimaxir/gpt-2-simple
Python package to easily retrain OpenAI's GPT-2 text-generating model on new texts
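A hedged sketch of the retraining workflow the package advertises, following the pattern in its README at the time; "corpus.txt" is a placeholder file, the "124M" model name may differ by version, and the package targets TensorFlow 1.x:

```python
# Sketch: download a GPT-2 checkpoint, fine-tune it on a local text file,
# then sample from the fine-tuned model.
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")        # fetch the base checkpoint

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              "corpus.txt",                  # placeholder training text
              model_name="124M",
              steps=1000)                    # adjust to taste

gpt2.generate(sess)                          # print generated samples
```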
stellargraph/stellargraph
StellarGraph - Machine Learning on Graphs
CyberZHG/keras-bert
Implementation of BERT that can load the official pre-trained models for feature extraction and prediction

salesforce/ctrl
Conditional Transformer Language Model for Controllable Generation
imcaspar/gpt2-ml
GPT-2 for multiple languages, including pretrained models (a 1.5-billion-parameter Chinese pretrained model)
ConnorJL/GPT2
An implementation of GPT-2 training that supports TPUs
yao8839836/text_gcn
Graph Convolutional Networks for Text Classification. AAAI 2019
brightmart/bert_language_understanding
Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-training TextCNN
rowanz/grover
Code for the paper "Defending Against Neural Fake News", https://rowanzellers.com/grover/
tkipf/keras-gcn
Keras implementation of Graph Convolutional Networks
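For context, a generic illustration of the GCN propagation rule this kind of implementation is built around, H' = ReLU(D^-1/2 (A + I) D^-1/2 H W) from Kipf & Welling; this is a standalone NumPy sketch, not code taken from the repository:

```python
# One GCN propagation step on a toy 3-node path graph.
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # adjacency matrix
H = np.random.rand(3, 4)                 # node features (3 nodes, 4 features)
W = np.random.rand(4, 2)                 # layer weights (4 -> 2)

A_hat = A + np.eye(3)                                      # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))     # normalized degrees
H_next = np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)  # ReLU
print(H_next.shape)                                        # (3, 2)
```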
brightmart/sentiment_analysis_fine_grain
Multi-label classification with BERT; fine-grained sentiment analysis from AI Challenger
transformerlab/transformerlab-app
Open Source Application for Advanced LLM Engineering: interact, train, fine-tune, and evaluate large language models on your own computer.
shawwn/gpt-2
Code for the paper "Language Models are Unsupervised Multitask Learners"
jiehsheng/PatentTransformer
Transformer models for Augmented Inventing
markshope/conference-materials
Conference Materials
markshope/nctu-workshop
Slides, links, and tutorials