Pinned Repositories
18S103195
Chess and Go implemented in Java
acl-style-files
Official style files for papers submitted to venues of the Association for Computational Linguistics
AdderNet
Code for the paper "AdderNet: Do We Really Need Multiplications in Deep Learning?"
byol-pytorch
Usable implementation of "Bootstrap Your Own Latent" self-supervised learning, from DeepMind, in PyTorch
capsule_text_classification
CSDS
This is the official repo for the paper "CSDS: A Fine-grained Chinese Dataset for Customer Service Dialogue Summarization", accepted at EMNLP 2021
fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
inf_data_quality_control
Mask-Predict
A masked language modeling objective to train a model to predict any subset of the target words, conditioned on both the input text and a partially masked target translation.
Thread
jingmu123's Repositories
jingmu123/18S103195
Chess and Go implemented in Java
jingmu123/Thread
jingmu123/acl-style-files
Official style files for papers submitted to venues of the Association for Computational Linguistics
jingmu123/AdderNet
Code for the paper "AdderNet: Do We Really Need Multiplications in Deep Learning?"
jingmu123/byol-pytorch
Usable implementation of "Bootstrap Your Own Latent" self-supervised learning, from DeepMind, in PyTorch
jingmu123/capsule_text_classification
jingmu123/CSDS
This is the official repo for the paper "CSDS: A Fine-grained Chinese Dataset for Customer Service Dialogue Summarization", accepted at EMNLP 2021
jingmu123/fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
jingmu123/inf_data_quality_control
jingmu123/Mask-Predict
A masked language modeling objective to train a model to predict any subset of the target words, conditioned on both the input text and a partially masked target translation.
jingmu123/mosesdecoder
Moses, the machine translation system
jingmu123/Multi_Learning
jingmu123/noah-research
Noah Research
jingmu123/NXT-Switchboard-Disfluency-Parser
A parsing tool to extract conversations from the NXT-Switchboard corpus with disfluency annotations.
jingmu123/OpenKE
An Open-Source Package for Knowledge Embedding (KE)
jingmu123/Pretrained-Language-Model_cemat
Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab.
jingmu123/XLM
Original PyTorch implementation of Cross-lingual Language Model Pretraining.