aurora95's Stars
Significant-Gravitas/AutoGPT
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
ChatGPTNextWeb/ChatGPT-Next-Web
A cross-platform ChatGPT/Gemini UI (Web / PWA / Linux / Win / MacOS). Deploy your own cross-platform ChatGPT/Gemini app with one click.
CompVis/stable-diffusion
A latent text-to-image diffusion model
facebookresearch/segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
apache/arrow
Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
BlinkDL/RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.
state-spaces/mamba
Mamba SSM architecture
CompVis/latent-diffusion
High-Resolution Image Synthesis with Latent Diffusion Models
lucidrains/denoising-diffusion-pytorch
Implementation of Denoising Diffusion Probabilistic Models in PyTorch
codertimo/BERT-pytorch
PyTorch implementation of Google AI's 2018 BERT
hojonathanho/diffusion
Denoising Diffusion Probabilistic Models
GuyTevet/motion-diffusion-model
The official PyTorch implementation of the paper "Human Motion Diffusion Model"
YangLing0818/Diffusion-Models-Papers-Survey-Taxonomy
Diffusion model papers, survey, and taxonomy
pengzhiliang/MAE-pytorch
Unofficial PyTorch implementation of "Masked Autoencoders Are Scalable Vision Learners"
google/neural-tangents
Fast and Easy Infinite Neural Networks in Python
cvxgrp/cvxpylayers
Differentiable convex optimization layers
VainF/Awesome-Anything
General AI methods for Anything: AnyObject, AnyGeneration, AnyModel, AnyTask, AnyX
facebookresearch/theseus
A library for differentiable nonlinear optimization
chq1155/A-Survey-on-Generative-Diffusion-Model
nv-tlabs/ASE
MarkMoHR/Awesome-Referring-Image-Segmentation
:books: A collection of papers about Referring Image Segmentation.
locuslab/optnet
OptNet: Differentiable Optimization as a Layer in Neural Networks
SysCV/pcan
Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation, NeurIPS 2021 Spotlight
RUCAIBox/PLMPapers
A paper list of pre-trained language models (PLMs).
heyuanYao-pku/Control-VAE
ziplab/Mesa
This is the official PyTorch implementation for "Mesa: A Memory-saving Training Framework for Transformers".
ruosongwang/CNTK
Convolutional Neural Tangent Kernel
alexpashevich/E.T.
Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal transformer that encodes language inputs and the full episode history of visual observations and actions.
LeoYu/neural-tangent-kernel-UCI
Testing the Neural Tangent Kernel (NTK) on small UCI datasets
zxy556677/EasyGen
The official code for the paper "EasyGen: Easing Multimodal Generation with a Bidirectional Conditional Diffusion Model and LLMs"