Pinned Repositories
RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
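The "RNN at inference, GPT at training" claim rests on RWKV's linear-time token mixing: each channel keeps a small running state that is decayed and updated per token, instead of attending over the whole context. Below is a minimal single-channel sketch of that WKV-style recurrence (variable names `w`, `u`, `k`, `v` follow the common description of RWKV-4; this is an illustrative sketch, not code from the repository):

```python
import numpy as np

def wkv_recurrence(w, u, k, v):
    """Sketch of an RWKV-4-style WKV recurrence for one channel.

    w: channel decay (>= 0), u: bonus applied to the current token,
    k, v: per-token key/value scalars, each of shape (T,).
    Returns the mixed output for each of the T tokens.
    """
    T = len(k)
    out = np.empty(T)
    a, b = 0.0, 0.0  # running numerator / denominator of the weighted average
    for t in range(T):
        # the current token gets an extra weight exp(u + k_t)
        out[t] = (a + np.exp(u + k[t]) * v[t]) / (b + np.exp(u + k[t]))
        # fold token t into the state, decaying the past by exp(-w)
        a = np.exp(-w) * a + np.exp(k[t]) * v[t]
        b = np.exp(-w) * b + np.exp(k[t])
    return out
```

Because the state `(a, b)` is constant-size, inference cost and memory are O(1) per token regardless of context length, which is where the "fast inference, saves VRAM" claims come from; during training the same computation can be unrolled in parallel across tokens.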
O-LoRA
CBLIP2
CTFL-Beta
Code for the paper "Contrastive Time–Frequency Learning for Radar Signal Sorting"
cudnn_repos
LPCL_beta
Code for the LPCL paper (beta, not yet cleaned up)
mPLUG-Owl
mPLUG-Owl🦉: Modularization Empowers Large Language Models with Multimodality
NSExperiment
Youngluc's Repositories
Youngluc/CTFL-Beta
Youngluc/CBLIP2
Youngluc/mPLUG-Owl
Youngluc/cudnn_repos
Youngluc/LPCL_beta
Youngluc/NSExperiment