Pinned Repositories
alpaca-lora
Instruct-tune LLaMA on consumer hardware
android-compose-codelabs
android-demo
BLIP
PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
chilloutmix-ni_from_huggingface.co_swl-models
Chinese-alpaca-lora
Luotuo (骆驼): A Chinese instruction-finetuned LLaMA. Developed by 陈启源 @ Central China Normal University & 李鲁鲁 @ SenseTime & 冷子昂 @ SenseTime
CLIP
CLIP (Contrastive Language-Image Pretraining): predicts the most relevant text snippet for a given image
CodeFormer
[NeurIPS 2022] Towards Robust Blind Face Restoration with Codebook Lookup Transformer
deep-learning
ffi
Utilities for working with Foreign Function Interface (FFI) code