Pinned Repositories
rPPG-Toolbox
rPPG-Toolbox: Deep Remote PPG Toolbox (NeurIPS 2023)
angel_system
arxiv-collector
A little Python script to collect LaTeX sources for upload to the arXiv.
Homekit2020
LLMs-and-Probabilistic-Reasoning
Data and software artifacts for the EMNLP 2024 (Main) paper "What Are the Odds? Language Models Are Capable of Probabilistic Reasoning"
MA-rPPG-Video-Toolbox
The source code and pre-trained models for Motion Matters: Neural Motion Transfer for Better Camera Physiological Sensing (WACV 2024, Oral).
PPSNet
PPSNet: Leveraging Near-Field Lighting for Monocular Depth Estimation from Endoscopy Videos (ECCV 2024)
yahskapar.github.io
yahskapar's Repositories
yahskapar/MA-rPPG-Video-Toolbox
The source code and pre-trained models for Motion Matters: Neural Motion Transfer for Better Camera Physiological Sensing (WACV 2024, Oral).
yahskapar/PPSNet
PPSNet: Leveraging Near-Field Lighting for Monocular Depth Estimation from Endoscopy Videos (ECCV 2024)
yahskapar/LLMs-and-Probabilistic-Reasoning
Data and software artifacts for the EMNLP 2024 (Main) paper "What Are the Odds? Language Models Are Capable of Probabilistic Reasoning"
yahskapar/Homekit2020
yahskapar/rPPG-Toolbox
yahskapar/yahskapar.github.io
yahskapar/angel_system
yahskapar/bark-with-voice-clone
🔊 Text-prompted generative audio model, with the ability to clone voices
yahskapar/C3VD
Colonoscopy 3D Video Dataset (C3VD) acquired with a high-definition clinical colonoscope and high-fidelity colon models for benchmarking computer vision methods in colonoscopy.
yahskapar/DROID-SLAM
yahskapar/dso
Direct Sparse Odometry
yahskapar/FactorMatte
yahskapar/first-order-model
This repository contains the source code for the paper "First Order Motion Model for Image Animation"
yahskapar/CHA
Conversational Health Agents: A Personalized LLM-powered Agent Framework
yahskapar/colmap
COLMAP - Structure-from-Motion and Multi-View Stereo
yahskapar/control-a-video
Official Implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models"
yahskapar/ControlNet
Let us control diffusion models!
yahskapar/Depth-Anything
[CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation
yahskapar/HIPIE
Code release for "Hierarchical Open-vocabulary Universal Image Segmentation"
yahskapar/insightface
State-of-the-art 2D and 3D Face Analysis Project
yahskapar/Marigold
Marigold: Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation
yahskapar/MVPSNet
yahskapar/NLUT
Code for NLUT: Neural-based 3D Lookup Tables for Video Photorealistic Style Transfer
yahskapar/projectaria_eyetracking
Project Aria Social Eye Tracking Model
yahskapar/RobustVideoMatting
Robust Video Matting in PyTorch, TensorFlow, TensorFlow.js, ONNX, CoreML!
yahskapar/SadTalker
[CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
yahskapar/Segment-and-Track-Anything
An open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively. The primary algorithms are the Segment Anything Model (SAM) for key-frame segmentation and Associating Objects with Transformers (AOT) for efficient tracking and propagation.
yahskapar/SemanticSegmentation
A framework for training segmentation models in PyTorch on LabelMe annotations, with pretrained examples of skin, cat, and pizza-topping segmentation
yahskapar/tandem
[CoRL '21] TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo
yahskapar/WCT2
Software that can perform photorealistic style transfer without the need for any post-processing steps.