Pinned Repositories
allennlp
An open-source NLP research library, built on PyTorch.
allennlp-models
Officially supported AllenNLP models
AutoCompressors
Adapting Language Models to Compress Long Contexts
chap_amr_parser
CoLT5-attention
Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch
neural_based_dmv
nner_as_parsing
seq2seq_with_qcfg
struct-vat
VLGAE
Official Implementation for CVPR 2022 paper "Unsupervised Vision-Language Parsing: Seamlessly Bridging Visual Scene Graphs with Language Structures via Dependency Relationships"
LouChao98's Repositories
LouChao98/VLGAE
Official Implementation for CVPR 2022 paper "Unsupervised Vision-Language Parsing: Seamlessly Bridging Visual Scene Graphs with Language Structures via Dependency Relationships"
LouChao98/nner_as_parsing
LouChao98/seq2seq_with_qcfg
LouChao98/chap_amr_parser
LouChao98/struct-vat
LouChao98/AutoCompressors
Adapting Language Models to Compress Long Contexts
LouChao98/CoLT5-attention
Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch
LouChao98/Diffusion-LM
Diffusion-LM
LouChao98/fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
LouChao98/graph_ensemble_learning
Graph Ensemble Learning
LouChao98/dynamic-sparse-flash-attention
LouChao98/easy-oa
Chrome extension for OA sites like arXiv and OpenReview: 1. go from a PDF back to its abstract page; 2. rename the PDF page with the paper title.
LouChao98/easy-to-hard
Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision
LouChao98/GaLore
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
LouChao98/lambeq
A high-level Python library for Quantum Natural Language Processing
LouChao98/landmark-attention
Landmark Attention: Random-Access Infinite Context Length for Transformers
LouChao98/lightning
Build and train PyTorch models and connect them to the ML lifecycle using Lightning App templates, without handling DIY infrastructure, cost management, scaling, and other headaches.
LouChao98/llama-moe
⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training
LouChao98/lp-sparsemap
LP-SparseMAP: Differentiable sparse structured prediction in coarse factor graphs
LouChao98/mamba
LouChao98/non_neg
Official Code for ICLR 2024 Paper: Non-negative Contrastive Learning
LouChao98/parameter-efficient-moe
LouChao98/parserllm
Use context-free grammars with an LLM
LouChao98/Permutational-Context-Windows
LouChao98/picard
PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models. PICARD is a ServiceNow Research project that was started at Element AI.
LouChao98/Pushdown-Layers
Code for Pushdown Layers from our EMNLP 2023 paper
LouChao98/rnng-pytorch
LouChao98/stack-attention
Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns"
LouChao98/transformer_grammars
Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale, TACL (2022)
LouChao98/vqtree