littlehacker26
My current research interests include Affective Computing and Controllable Text Generation.
Beijing Institute of Technology · Beijing
Pinned Repositories
2018-NanJing-AI-Application-Competition
Description of the solution approach and project source code for the competition problem
ACL2021MF
Source Code For ACL 2021 Paper "Mention Flags (MF): Constraining Transformer-based Text Generators"
adavae
VAE with adaptive parameter-efficient GPT-2s for language modeling
AutoReinforce
Automatic hardening for Android apps
awesome-emotion-recognition-in-conversations
A comprehensive reading list for Emotion Recognition in Conversations
Awesome-LLM-Synthetic-Data
A reading list on LLM-based Synthetic Data Generation 🔥
awesome-phd-advice
Collection of advice for prospective and current PhD students
Discriminator-Cooperative-Unlikelihood-Prompt-Tuning
Code implementation of the EMNLP 2022 paper "DisCup: Discriminator Cooperative Unlikelihood Prompt-tuning for Controllable Text Generation"
PaperList
A record of papers I have read
Residual_Memory_Transformer
This repository contains code, data, checkpoints, and training and evaluation instructions for the paper: Controllable Text Generation with Residual Memory Transformer
littlehacker26's Repositories
littlehacker26/Discriminator-Cooperative-Unlikelihood-Prompt-Tuning
Code implementation of the EMNLP 2022 paper "DisCup: Discriminator Cooperative Unlikelihood Prompt-tuning for Controllable Text Generation"
littlehacker26/2018-NanJing-AI-Application-Competition
Description of the solution approach and project source code for the competition problem
littlehacker26/Residual_Memory_Transformer
This repository contains code, data, checkpoints, and training and evaluation instructions for the paper: Controllable Text Generation with Residual Memory Transformer
littlehacker26/PaperList
A record of papers I have read
littlehacker26/ACL2021MF
Source Code For ACL 2021 Paper "Mention Flags (MF): Constraining Transformer-based Text Generators"
littlehacker26/adavae
VAE with adaptive parameter-efficient GPT-2s for language modeling
littlehacker26/Awesome-LLM-Synthetic-Data
A reading list on LLM-based Synthetic Data Generation 🔥
littlehacker26/awesome-phd-advice
Collection of advice for prospective and current PhD students
littlehacker26/baichuan-7B
A large-scale 7B pretrained language model developed by BaiChuan-Inc.
littlehacker26/BIThesis
📖 An unofficial collection of LaTeX templates for Beijing Institute of Technology, including undergraduate and graduate thesis templates and more. 🎉 (See the wiki and the manuals in releases for more documentation.)
littlehacker26/COCON_ICLR2021
PyTorch implementation of CoCon: A Self-Supervised Approach for Controlled Text Generation
littlehacker26/CommonGen
A Constrained Text Generation Challenge Towards Generative Commonsense Reasoning
littlehacker26/DExperts
Code associated with the ACL 2021 DExperts paper
littlehacker26/diasenti
Conversational Multimodal Emotion Recognition
littlehacker26/gdc
littlehacker26/gorilla
Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls)
littlehacker26/HuatuoGPT
HuatuoGPT, Towards Taming Language Models To Be a Doctor. (An Open Medical GPT)
littlehacker26/LLaMA-Factory
Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
littlehacker26/Mengzi
Mengzi Pretrained Models
littlehacker26/naacl-2021-fudge-controlled-generation
littlehacker26/neurologic_decoding
littlehacker26/Non-Residual-Prompting
littlehacker26/OpenRLHF
An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & Mixtral)
littlehacker26/P-tuning
A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".
littlehacker26/Paper_Writing_Tips
littlehacker26/PPLM
Plug and Play Language Model implementation. Allows steering the topic and attributes of GPT-2 models.
littlehacker26/Progressive-Hint
This is the official implementation of "Progressive-Hint Prompting Improves Reasoning in Large Language Models"
littlehacker26/Residual-EBM
Code for Residual Energy-Based Models for Text Generation in PyTorch.
littlehacker26/self-refine
LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively.
littlehacker26/transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.