Pinned Repositories
detect_mask
dotfiles-Q
Laughing-q's dotfiles
evaluate-and-plot_score
lqcv
Laughing-q Computer Vision Foundation
MTCNN_pytorch
MTCNN in PyTorch
nvim
Laughing-q's nvim config
xinye_competition
yolov5_annotations
Annotations of YOLOv5 v5.0
ultralytics
Ultralytics YOLO11 🚀
yolov5
YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
Laughing-q's Repositories
Laughing-q/nvim
Laughing-q's nvim config
Laughing-q/dotfiles-Q
Laughing-q's dotfiles
Laughing-q/lqcv
Laughing-q Computer Vision Foundation
Laughing-q/JSON2YOLO
Convert JSON annotations into YOLO format.
Laughing-q/lf.nvim
Lf file manager for Neovim (in Lua)
Laughing-q/YOLO-World
Real-Time Open-Vocabulary Object Detection
Laughing-q/AdelaiDet
AdelaiDet is an open source toolbox for multiple instance-level detection and recognition tasks.
Laughing-q/assets
Laughing-q/dmscripts
Fork of Derek Taylor (DistroTube)'s dmscripts on GitLab
Laughing-q/dwm
Laughing-q's build of dwm, modified from Luke Smith
Laughing-q/Grounded-Segment-Anything
Marrying Grounding DINO with Segment Anything & Stable Diffusion & BLIP & Whisper - Automatically Detect, Segment and Generate Anything with Image, Text, and Speech Inputs
Laughing-q/GroundingDINO
The official implementation of "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
Laughing-q/HDTF
The dataset and code for "Flow-guided One-shot Talking Face Generation with a High-resolution Audio-visual Dataset"
Laughing-q/labelImg
LabelImg is now part of the Label Studio community. The popular image annotation tool created by Tzutalin is no longer actively being developed, but you can check out Label Studio, the open source data labeling tool for images, text, hypertext, audio, video and time-series data.
Laughing-q/MetricTrainer
Laughing-q/MyVim
Laughing-q/pytorch-image-models
PyTorch image models, scripts, pretrained weights -- ResNet, ResNeXt, EfficientNet, EfficientNetV2, NFNet, Vision Transformer, MixNet, MobileNet-V3/V2, RegNet, DPN, CSPNet, and more
Laughing-q/segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Laughing-q/SparseInst
SparseInst: Sparse Instance Activation for Real-Time Instance Segmentation, CVPR 2022
Laughing-q/st
Laughing-q's st, forked from Luke Smith.
Laughing-q/sxiv
Simple X Image Viewer
Laughing-q/tokyonight.nvim
🏙 A clean, dark Neovim theme written in Lua, with support for lsp, treesitter and lots of plugins. Includes additional themes for Kitty, Alacritty, iTerm and Fish.
Laughing-q/ultralytics
YOLOv8 🚀 in PyTorch > ONNX > CoreML > TFLite
Laughing-q/wallpapers
Laughing-q/Wav2Lip
This repository contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020.
Laughing-q/yolov5-exp
YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
Laughing-q/yolov5-face
YOLO5Face: Why Reinventing a Face Detector (https://arxiv.org/abs/2105.12931)
Laughing-q/YOLOv6
YOLOv6: a single-stage object detection framework dedicated to industrial applications.
Laughing-q/yolov7
Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
Laughing-q/zsh