Pinned Repositories
CLIP
Contrastive Language-Image Pretraining
language-resources
Datasets and tools for basic natural language processing.
LoRA
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
mcan-vqa
Deep Modular Co-Attention Networks for Visual Question Answering
mmnas
Deep Multimodal Neural Architecture Search
MOSS
An open-source tool-augmented conversational language model from Fudan University
openvqa
A lightweight, scalable, and general framework for visual question answering (VQA) research
PFNN
Phase-Functioned Neural Networks for Character Control
pytorch_pretrained_BERT
UNITER
Research code for ECCV 2020 paper "UNITER: UNiversal Image-TExt Representation Learning"
cuiyuhao1996's Repositories
cuiyuhao1996/mcan-vqa
Deep Modular Co-Attention Networks for Visual Question Answering
cuiyuhao1996/pytorch_pretrained_BERT
cuiyuhao1996/UNITER
Research code for ECCV 2020 paper "UNITER: UNiversal Image-TExt Representation Learning"
cuiyuhao1996/CLIP
Contrastive Language-Image Pretraining
cuiyuhao1996/language-resources
Datasets and tools for basic natural language processing.
cuiyuhao1996/LoRA
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
cuiyuhao1996/mmnas
Deep Multimodal Neural Architecture Search
cuiyuhao1996/MOSS
An open-source tool-augmented conversational language model from Fudan University
cuiyuhao1996/openvqa
A lightweight, scalable, and general framework for visual question answering (VQA) research
cuiyuhao1996/PFNN
Phase-Functioned Neural Networks for Character Control
cuiyuhao1996/rnnoise
Recurrent neural network for audio noise reduction
cuiyuhao1996/rosita
ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration
cuiyuhao1996/TimeSformer
The official PyTorch implementation of our paper "Is Space-Time Attention All You Need for Video Understanding?"
cuiyuhao1996/transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch, TensorFlow, and JAX.