alfredplpl
Research Scientist. Interests: data science, machine learning, robotics, neuroscience
CyberAgent, Inc., Japan
alfredplpl's Stars
ChenHsing/Awesome-Video-Diffusion-Models
[CSUR] A Survey on Video Diffusion Models
boomb0om/text2image-benchmark
Benchmark for generative image models
JunyaoHu/common_metrics_on_video_quality
You can easily calculate FVD, PSNR, SSIM, LPIPS for evaluating the quality of generated or predicted videos.
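The repo wraps these metrics behind its own helper functions; as a rough stand-in, the sketch below computes per-frame PSNR, SSIM, and LPIPS for a (B, T, C, H, W) video batch using torchmetrics (FVD is omitted since it needs a pretrained I3D feature extractor). The function and tensor names are illustrative, not the repo's API.

```python
# Hedged sketch: per-frame video metrics with torchmetrics, not the repo's own helpers.
import torch
from torchmetrics.image import PeakSignalNoiseRatio, StructuralSimilarityIndexMeasure
from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity

def video_metrics(pred, target):
    """pred/target: float tensors in [0, 1], shape (B, T, C, H, W)."""
    b, t, c, h, w = pred.shape
    pred_f = pred.reshape(b * t, c, h, w)      # fold time into the batch dimension
    target_f = target.reshape(b * t, c, h, w)
    psnr = PeakSignalNoiseRatio(data_range=1.0)(pred_f, target_f)
    ssim = StructuralSimilarityIndexMeasure(data_range=1.0)(pred_f, target_f)
    lpips = LearnedPerceptualImagePatchSimilarity(normalize=True)(pred_f, target_f)
    return psnr.item(), ssim.item(), lpips.item()
```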
songweige/content-debiased-fvd
[CVPR 2024] On the Content Bias in Fréchet Video Distance
NVlabs/Minitron
A family of compressed models obtained via pruning and knowledge distillation
FYGitHub1009/Multi-Fractal-Dataset
1st-place solution in the Modules for Generating Pre-training Image Datasets contest
ShuhongChen/vroid_renderer
CVPR 2023: PAniC-3D, rendering
ShuhongChen/vroid-dataset
CVPR 2023: PAniC-3D, Vroid dataset downloader
THUDM/CogVideo
Text- and image-to-video generation: CogVideoX (2024) and CogVideo (ICLR 2023)
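For a quick try-out, the CogVideoX checkpoints are also exposed through diffusers; the sketch below assumes the public "THUDM/CogVideoX-2b" checkpoint and documented default settings rather than the repo's own SAT-based scripts.

```python
# Hedged sketch: CogVideoX text-to-video via the diffusers integration.
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()  # reduces peak VRAM at some speed cost

video = pipe(
    prompt="a panda playing guitar in a bamboo forest",
    num_frames=49,
    num_inference_steps=50,
    guidance_scale=6.0,
).frames[0]
export_to_video(video, "cogvideox_sample.mp4", fps=8)
```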
Spawning-Inc/datadiligence
Respect generative AI opt-outs in your ML training pipeline.
NVIDIA/Megatron-LM
Ongoing research training transformer models at scale
UuuNyaa/blender_motion_generate_tools
motion_generate_tools is a Blender add-on for generating motion using MDM (Human Motion Diffusion Model).
city96/ComfyUI-GGUF
GGUF Quantization support for native ComfyUI models
ostris/ai-toolkit
Various AI scripts. Mostly Stable Diffusion stuff.
huggingface/optimum-quanto
A pytorch quantization backend for optimum
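A minimal sketch of the documented quanto workflow: quantize a module's weights to int8, then freeze them to materialize the quantized tensors; the toy model here is arbitrary.

```python
# Hedged sketch: post-training weight quantization with optimum-quanto.
import torch
from optimum.quanto import quantize, freeze, qint8

model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
)
quantize(model, weights=qint8)  # swap eligible layers for quantized variants
freeze(model)                   # convert float weights to their int8 representation

with torch.no_grad():
    out = model(torch.randn(1, 512))
```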
apple/ml-mdm
Train high-quality text-to-image diffusion models in a data & compute efficient manner
THUDM/ImageReward
[NeurIPS 2023] ImageReward: Learning and Evaluating Human Preferences for Text-to-image Generation
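Typical use is scoring generated images against their prompt; the sketch below follows the README-style load/score calls as I recall them, so treat the exact names and paths as assumptions.

```python
# Hedged sketch: ranking generated images by ImageReward score.
import ImageReward as RM

model = RM.load("ImageReward-v1.0")  # downloads the reward model on first use
prompt = "a painting of a lighthouse at dawn"
image_paths = ["sample_0.png", "sample_1.png"]  # placeholder files

scores = model.score(prompt, image_paths)  # higher = better human-preference fit
print(sorted(zip(image_paths, scores), key=lambda x: -x[1]))
```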
Alpha-VLLM/Lumina-mGPT
Official Implementation of "Lumina-mGPT: Illuminate Flexible Photorealistic Text-to-Image Generation with Multimodal Generative Pretraining"
cloneofsimo/minRF
Minimal implementation of scalable rectified flow transformers, based on SD3's approach
gnobitab/RectifiedFlow
Official Implementation of Rectified Flow (ICLR2023 Spotlight)
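Both of the rectified flow repos above implement the same core objective: interpolate linearly between a noise sample and a data sample and regress the constant velocity between the endpoints. A minimal PyTorch sketch of that loss follows, with a hypothetical `model(x_t, t)` interface.

```python
# Hedged sketch of the rectified flow training objective (Liu et al. convention):
# x_t = t * x1 + (1 - t) * x0, and the network regresses the velocity x1 - x0.
import torch

def rectified_flow_loss(model, x1):
    """x1: a batch of data samples, shape (B, ...). `model(x_t, t)` is assumed
    to predict a velocity tensor with the same shape as x1."""
    x0 = torch.randn_like(x1)                      # noise endpoint
    t = torch.rand(x1.shape[0], device=x1.device)  # uniform time in [0, 1]
    t_ = t.view(-1, *([1] * (x1.dim() - 1)))       # broadcast over non-batch dims
    x_t = t_ * x1 + (1.0 - t_) * x0                # straight-line interpolation
    v_target = x1 - x0                             # constant velocity along the line
    v_pred = model(x_t, t)
    return torch.mean((v_pred - v_target) ** 2)
```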
Azure/MS-AMP
Microsoft Automatic Mixed Precision Library
huggingface/accelerate
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
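A minimal sketch of the usual Accelerate pattern: construct an `Accelerator`, `prepare` the model, optimizer, and dataloader, and route the backward pass through `accelerator.backward`. The toy model and data are placeholders, and the script is meant to be started with `accelerate launch`.

```python
# Hedged sketch: a device-agnostic training loop with Hugging Face Accelerate.
import torch
from accelerate import Accelerator

accelerator = Accelerator(mixed_precision="bf16")  # "fp16"/"fp8" also possible, hardware permitting

model = torch.nn.Linear(128, 10)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(1024, 128), torch.randint(0, 10, (1024,))),
    batch_size=32,
)

model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for x, y in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    accelerator.backward(loss)  # handles loss scaling for mixed precision
    optimizer.step()
```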
NVIDIA/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
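A minimal sketch of FP8 execution with the PyTorch API: run Transformer Engine modules inside `fp8_autocast` with a scaling recipe. It assumes an FP8-capable GPU (Hopper/Ada), and the layer sizes are arbitrary.

```python
# Hedged sketch: an FP8 forward/backward pass with Transformer Engine.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# HYBRID = E4M3 for forward activations/weights, E5M2 for backward gradients.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

model = te.Linear(768, 768, bias=True).cuda()
x = torch.randn(32, 768, device="cuda", requires_grad=True)

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = model(x)

y.sum().backward()
```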
opensource-jp/Open-Source-AI
Japanese translation of the Open Source AI Definition
black-forest-labs/flux
Official inference repo for FLUX.1 models
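For inference without the official repo's scripts, the FLUX.1 [schnell] checkpoint can also be driven through diffusers; the sketch below assumes that route, with the few-step, guidance-free settings the distilled model expects.

```python
# Hedged sketch: FLUX.1 [schnell] text-to-image via diffusers, not the official CLI.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # helps on GPUs with limited VRAM

image = pipe(
    "a watercolor painting of a lighthouse at dawn",
    num_inference_steps=4,  # schnell is distilled for few-step sampling
    guidance_scale=0.0,     # schnell does not use classifier-free guidance
).images[0]
image.save("flux_schnell.png")
```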
metmuseum/openaccess
The Metropolitan Museum of Art's Open Access Initiative
facebookresearch/sam2
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
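A minimal image-prompting sketch along the lines of the repo's examples, using the Hugging Face checkpoint; the point prompt and file names are placeholders, and the class and method names should be checked against the current README.

```python
# Hedged sketch: single-image segmentation with SAM 2 from a point prompt.
import numpy as np
import torch
from PIL import Image
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")

image = np.array(Image.open("photo.jpg").convert("RGB"))  # placeholder image
with torch.inference_mode():
    predictor.set_image(image)
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[500, 375]]),  # one foreground click (x, y)
        point_labels=np.array([1]),           # 1 = foreground point
    )
```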
liaopeiyuan/artbench
Benchmarking Generative Models with Artworks
thu-ml/low-bit-optimizers
Low-bit optimizers for PyTorch
lllyasviel/stable-diffusion-webui-forge