Pinned Repositories
animate-anything
Fine-Grained Open Domain Image Animation with Motion Guidance
audio2photoreal
Code and dataset for photorealistic Codec Avatars driven from audio
audiocraft
Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable music generation LM with textual and melodic conditioning.
autotrain-advanced
🤗 AutoTrain Advanced
awesome-diffusion-categorized
collection of diffusion model papers categorized by their subareas
champ
Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance
Depth-Anything
[CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation
DynamiCrafter
DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors
Emote-hack
Using ChatGPT (now Claude 3) to reverse-engineer code from the Emote white paper. WIP
Moore-AnimateAnyone
MoonEese's Repositories
MoonEese/awesome-diffusion-categorized
collection of diffusion model papers categorized by their subareas
MoonEese/Moore-AnimateAnyone
MoonEese/animate-anything
Fine-Grained Open Domain Image Animation with Motion Guidance
MoonEese/audio2photoreal
Code and dataset for photorealistic Codec Avatars driven from audio
MoonEese/audiocraft
Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable music generation LM with textual and melodic conditioning.
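By way of illustration, here is a minimal MusicGen sketch, assuming a working `audiocraft` install and the public `facebook/musicgen-small` checkpoint; the prompt and output filenames are invented for the example and this is not this fork's canonical usage.

```python
# Minimal MusicGen sketch (assumes `pip install audiocraft`; runs on CPU or GPU).
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")  # small public checkpoint
model.set_generation_params(duration=8)                     # seconds of audio per prompt

# Text-conditioned generation; one waveform per prompt.
wavs = model.generate(["lo-fi hip hop beat with warm piano"])

for i, wav in enumerate(wavs):
    # Writes out_0.wav etc., loudness-normalized at the model's sample rate.
    audio_write(f"out_{i}", wav.cpu(), model.sample_rate, strategy="loudness")
```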
MoonEese/autotrain-advanced
🤗 AutoTrain Advanced
MoonEese/champ
Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance
MoonEese/Depth-Anything
[CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation
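A hedged sketch of how this model is commonly run through the Hugging Face `transformers` depth-estimation pipeline; the checkpoint id `LiheYoung/depth-anything-small-hf` and the input filename are assumptions for the example, not part of this repo's own scripts.

```python
# Hedged sketch: monocular depth via the transformers pipeline
# (assumes a transformers version with the Depth Anything integration).
from transformers import pipeline
from PIL import Image

depth_estimator = pipeline(task="depth-estimation",
                           model="LiheYoung/depth-anything-small-hf")

image = Image.open("photo.jpg")           # any RGB image
result = depth_estimator(image)           # dict with "depth" (PIL) and "predicted_depth" (tensor)
result["depth"].save("photo_depth.png")   # grayscale depth map
```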
MoonEese/DynamiCrafter
DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors
MoonEese/Emote-hack
Using ChatGPT (now Claude 3) to reverse-engineer code from the Emote white paper. WIP
MoonEese/grok-1
Grok open release
MoonEese/InstantID
InstantID: Zero-shot Identity-Preserving Generation in Seconds 🔥
MoonEese/OOTDiffusion
Official implementation of OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on
MoonEese/PaddleSeg
Easy-to-use image segmentation library with awesome pre-trained model zoo, supporting wide-range of practical tasks in Semantic Segmentation, Interactive Segmentation, Panoptic Segmentation, Image Matting, 3D Segmentation, etc.
MoonEese/roop-unleashed
Evolved fork of roop with a web server and many additions
MoonEese/score_sde
Official code for Score-Based Generative Modeling through Stochastic Differential Equations (ICLR 2021, Oral)
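For orientation, the two equations at the heart of the paper (standard notation from Song et al., not anything specific to this fork):

```latex
\text{forward SDE: } \mathrm{d}x = f(x, t)\,\mathrm{d}t + g(t)\,\mathrm{d}w
\qquad
\text{reverse SDE: } \mathrm{d}x = \bigl[f(x, t) - g(t)^2\,\nabla_x \log p_t(x)\bigr]\,\mathrm{d}t + g(t)\,\mathrm{d}\bar{w}
```

Sampling integrates the reverse-time SDE with a learned score network $s_\theta(x, t) \approx \nabla_x \log p_t(x)$.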
MoonEese/GPT4V-Image-Captioner
MoonEese/llama-recipes
Scripts for fine-tuning Llama2 with composable FSDP & PEFT methods, covering single- and multi-node GPUs. Supports default & custom datasets for applications such as summarization & question answering, and a number of inference solutions such as HF TGI and vLLM for local or cloud deployment. Demo apps showcase Llama2 for WhatsApp & Messenger.
MoonEese/llama3
The official Meta Llama 3 GitHub site
MoonEese/OneTrainer
OneTrainer is a one-stop solution for all your stable diffusion training needs.
MoonEese/Open-Sora
Open-Sora: Democratizing Efficient Video Production for All
MoonEese/Prompt-Engineering-Guide
🐙 Guides, papers, lecture, notebooks and resources for prompt engineering
MoonEese/Real-Time-Voice-Cloning
Clone a voice in 5 seconds to generate arbitrary speech in real-time
MoonEese/SadTalker-Video-Lip-Sync
This project builds on SadTalker to implement Wav2Lip-style video lip-sync. Lip shapes are generated by driving a video file with audio, and a configurable enhancement of the synthesized lip (face) region improves the clarity of the generated lips. The DAIN deep-learning frame-interpolation algorithm adds intermediate frames to the generated video, smoothing the motion transitions between synthesized lip frames so the result looks more fluid, realistic, and natural.
MoonEese/StableVITON
[CVPR2024] StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On
MoonEese/StreamDiffusion
StreamDiffusion: A Pipeline-Level Solution for Real-Time Interactive Generation
MoonEese/SyncTalk
[CVPR 2024] This is the official source for our paper "SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis"
MoonEese/TTS
🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
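A minimal usage sketch of the `TTS` Python API, assuming `pip install TTS` and the public `tts_models/en/ljspeech/tacotron2-DDC` model (model names can change between releases); the text and output path are placeholders.

```python
# Hedged sketch of Coqui TTS's high-level API.
from TTS.api import TTS

tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")
tts.tts_to_file(text="Hello from a synthesized voice.", file_path="hello.wav")
```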
MoonEese/VBench
[CVPR2024] VBench: Comprehensive Benchmark Suite for Video Generative Models
MoonEese/Wav2Lip_realtime_facetime
This repository contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For an HD commercial model, please try out Sync Labs.