kamillle's Stars
yt-dlp/yt-dlp
A feature-rich command-line audio/video downloader
wagoodman/dive
A tool for exploring each layer in a Docker image
astral-sh/uv
An extremely fast Python package and project manager, written in Rust.
ml-explore/mlx
MLX: An array framework for Apple silicon
camenduru/stable-diffusion-webui-colab
stable diffusion webui colab
facebookresearch/segment-anything-2
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
NVIDIA/TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
LargeWorldModel/LWM
aws/karpenter-provider-aws
Karpenter is a Kubernetes Node Autoscaler built for flexibility, performance, and simplicity.
intel-analytics/ipex-llm
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
NVIDIA-AI-IOT/torch2trt
An easy-to-use PyTorch to TensorRT converter
guardrails-ai/guardrails
Adding guardrails to large language models.
nanomsg/nng
nanomsg-next-generation -- light-weight brokerless messaging
kserve/kserve
Standardized Serverless ML Inference Platform on Kubernetes
nucleuscloud/neosync
Open source data anonymization and synthetic data orchestration for developers. Create high fidelity synthetic data and sync it across your environments.
intel/pcm
Intel® Performance Counter Monitor (Intel® PCM)
haoheliu/AudioLDM
AudioLDM: Generate speech, sound effects, music and beyond, with text.
Rikorose/DeepFilterNet
Noise suppression using deep filtering
CjangCjengh/MoeGoe
Executable file for VITS inference
flydelabs/flyde
🌟 Open-source, visual programming for developers. Includes a VS Code extension, integrates with existing TypeScript code, browser and Node.js.
intel/intel-extension-for-pytorch
A Python package that extends the official PyTorch to deliver improved performance on Intel platforms
cloudflare/orange
llm-jp/awesome-japanese-llm
Overview of Japanese LLMs
Canop/dysk
A Linux utility for getting information on filesystems, like df but better
p0p4k/vits2_pytorch
Unofficial VITS2 TTS implementation in PyTorch
marocchino/sticky-pull-request-comment
Creates a comment on a pull request; if one already exists, updates that comment.
GoogleCloudPlatform/gcping
The source for the CLI and web app at gcping.com
Anush008/fastembed-rs
Rust library for generating vector embeddings and reranking
aws-samples/awsome-distributed-training
Collection of best practices, reference architectures, model training examples, and utilities for training large models on AWS.
aws-solutions-library-samples/guidance-for-machine-learning-inference-on-aws
This Guidance demonstrates how to deploy a machine learning inference architecture on Amazon Elastic Kubernetes Service (Amazon EKS). It addresses the basic implementation requirements, as well as ways you can pack thousands of unique PyTorch deep learning (DL) models into a scalable architecture and evaluate performance at scale.