Pinned Repositories
cascaded-generation
Cascaded Text Generation with Markov Transformers
CBET-dataset
Cleaned Balanced Emotional Tweets (CBET) Dataset
chenyangh
CNAT
Non-autoregressive Translation by Learning Target Categorical Codes
DialogueGenerationWithEmotion
Response generation conditioned on a specified emotion.
DSLP
Deeply Supervised, Layer-wise Prediction-aware (DSLP) Transformer for Non-autoregressive Neural Machine Translation
SemEval2019Task3
Code for ANA at SemEval-2019 Task 3
SentimentAnalysis
Learning sentiment features from user reviews
Seq2Emo
Sequence to multi-label emotion classification
TAG-waTer_pAper_nameoloGy
TAG: waTer pAper nameoloGy
chenyangh's Repositories
chenyangh/DSLP
Deeply Supervised, Layer-wise Prediction-aware (DSLP) Transformer for Non-autoregressive Neural Machine Translation
chenyangh/chenyangh
chenyangh/CNAT
Non-autoregressive Translation by Learning Target Categorical Codes
chenyangh/Emotionator
The best emotion detector on this planet 🌍
chenyangh/mmsegmentation
OpenMMLab Semantic Segmentation Toolbox and Benchmark.
chenyangh/sacrebleu
Reference BLEU implementation that auto-downloads test sets and reports a version string to facilitate cross-lab comparisons
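A minimal sketch of how sacrebleu's signature string surfaces in its Python API (assuming sacrebleu >= 2.0; the example sentences are made up):

```python
# Hedged sketch of sacrebleu's Python metrics API (assumes sacrebleu >= 2.0).
from sacrebleu.metrics import BLEU

hypotheses = ["the cat sat on the mat"]            # system outputs (hypothetical)
references = [["the cat is sitting on the mat"]]   # one reference stream

bleu = BLEU()
result = bleu.corpus_score(hypotheses, references)
print(result.score)          # corpus-level BLEU
print(bleu.get_signature())  # version string for cross-lab comparison
```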
chenyangh/U-2-Net
Code for the Pattern Recognition 2020 paper "U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection."
chenyangh/vision_transformer
chenyangh/Awesome-Text-Diffusion-Models
[IJCAI'23] The official Github page of the paper "Diffusion Models for Non-autoregressive Text Generation: A Survey".
chenyangh/BANG
BANG is a pretraining model that bridges the gap between autoregressive (AR) and non-autoregressive (NAR) generation. AR and NAR generation can be uniformly characterized by the extent to which previous tokens may be attended to, and BANG bridges the two with a novel model structure for large-scale pretraining. The pretrained BANG model simultaneously supports AR, NAR, and semi-NAR generation to meet different requirements.
chenyangh/benchmarks
Collection of benchmarks written by researchers at Amii
chenyangh/chenyangh.github.io
A beautiful, simple, clean, and responsive Jekyll theme for academics
chenyangh/CMLMC
Code for the ICLR'22 paper "Improving Non-Autoregressive Translation Models Without Distillation"
chenyangh/DAT-Length-Control
chenyangh/eflomal
Efficient Low-Memory Aligner
chenyangh/esm
Evolutionary Scale Modeling (esm): Pretrained language models for proteins
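A hedged sketch of extracting per-token protein embeddings with fair-esm, following the loader names in the esm README (the sequence is made up; weights download on first call):

```python
# Sketch of the fair-esm inference flow (assumes the fair-esm package).
import torch
import esm

model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()

data = [("protein1", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")]  # hypothetical sequence
labels, strs, tokens = batch_converter(data)

with torch.no_grad():
    out = model(tokens, repr_layers=[33])
embeddings = out["representations"][33]  # per-token representations
```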
chenyangh/fairseq-1
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
chenyangh/fairseq2
FAIR Sequence Modeling Toolkit 2
chenyangh/Fully-NAT
chenyangh/giza-py
A simple, Python-based, command-line runner for MGIZA++.
chenyangh/GLUE-baselines
[DEPRECATED] Repo for exploring multi-task learning approaches to learning sentence representations
chenyangh/GMA
Code for ACL 2022 findings paper "Gaussian Multi-head Attention for Simultaneous Machine Translation"
chenyangh/MoE-Waitk
Code for EMNLP 2021 oral paper "Universal Simultaneous Machine Translation with Mixture-of-Experts Wait-k Policy"
chenyangh/NAG-BERT
[EACL'21] Non-Autoregressive Text Generation with Pretrained Language Models
chenyangh/OTTAWA
chenyangh/POINTER
chenyangh/pytorch-struct
Fast, general, and tested differentiable structured prediction in PyTorch
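A minimal sketch of the linear-chain CRF interface in torch_struct (class and property names per the pytorch-struct docs; the shapes and random potentials here are illustrative assumptions):

```python
# Hedged sketch of pytorch-struct's linear-chain CRF distribution.
import torch
from torch_struct import LinearChainCRF

batch, N, C = 2, 6, 3  # batch size, sequence length, number of tags
# Edge log-potentials: one (C x C) transition score per adjacent position pair.
log_potentials = torch.randn(batch, N - 1, C, C)

dist = LinearChainCRF(log_potentials)
print(dist.partition)        # log partition function, shape (batch,)
print(dist.marginals.shape)  # differentiable edge marginals, same shape as input
```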
chenyangh/REDER
[NeurIPS 2021] Duplex Sequence-to-Sequence Learning for Reversible Machine Translation
chenyangh/tensor2struct-public
Semantic parsers based on the encoder-decoder framework
chenyangh/terashuf
terashuf shuffles multi-terabyte text files using limited memory