Pinned Repositories
1xN
1xN Block Pattern for Network Sparsity
AAL-pruning
Filter Pruning for Deep Convolutional Neural Networks via Auxiliary Attention
DW
A Dual Weighting Label Assignment Scheme for Object Detection
DyRep
Official implementation for paper "DyRep: Bootstrapping Training with Dynamic Re-parameterization", CVPR 2022
GASN
A Novel Guided Anchor Siamese Network for Arbitrary Target-Of-Interest Tracking in Video-SAR
LPNet-PyTorch
This repository is a PyTorch version of the paper "Luminance-aware Pyramid Network for Low-light Image Enhancement" (TMM 2020).
ResamplingNet
ResamplingNet: End-to-End Adaptive Feature Resampling Network for Real-Time Aerial Tracking
Restoring-Extremely-Dark-Images-In-Real-Time
The project is the official implementation of our CVPR 2021 paper, "Restoring Extremely Dark Images in Real Time"
StreamYOLO
Real-time Object Detection for Streaming Perception, CVPR 2022
Ultra-Fast-Lane-Detection-v2-plus
Based on Ultra-Fast-Lane-Detection-v2 (UFLD-v2)
scott-mao's Repositories
scott-mao/ASSL
[NeurIPS'21 Spotlight] PyTorch code for our paper "Aligned Structured Sparsity Learning for Efficient Image Super-Resolution"
scott-mao/BEVDet
Official code base for BEVDet.
scott-mao/chineseocr_lite
Ultra-lightweight Chinese OCR with support for vertical text recognition and ncnn, mnn, and tnn inference (dbnet (1.8M) + crnn (2.5M) + anglenet (378KB)); total model size is only 4.7M
scott-mao/CHIP_NeurIPS2021
Code for "CHIP: CHannel Independence-based Pruning for Compact Neural Networks" (NeurIPS 2021).
scott-mao/DFDAFuse
The source code of the paper "DFDAFuse: An Infrared and Visible Image Fusion Network Using Densely Multi-Scale Feature Extraction and Dual Attention"
scott-mao/FAIG
NeurIPS 2021, Spotlight, Finding Discriminative Filters for Specific Degradations in Blind Super-Resolution
scott-mao/GFPGAN
GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration.
scott-mao/Image-Adaptive-YOLO
Image-Adaptive YOLO for Object Detection in Adverse Weather Conditions
scott-mao/JoJoGAN
Official PyTorch repo for JoJoGAN: One Shot Face Stylization
scott-mao/LESA
Locally Enhanced Self-Attention: Rethinking Self-Attention as Local and Context Terms
scott-mao/LLKD
Learning Lightweight Low-Light Enhancement Network using Pseudo Well-Exposed Images
scott-mao/mmpose
OpenMMLab Pose Estimation Toolbox and Benchmark.
scott-mao/MobileDetBenchmark
Mobile Detection Benchmark
scott-mao/pytorch-optimizer
torch-optimizer -- a collection of optimizers for PyTorch
scott-mao/RUAS
This is the official code for the paper "Learning with Nested Scene Modeling and Cooperative Architecture Search for Low-Light Vision"
scott-mao/SCL-LLE
SCL-LLE code
scott-mao/Torch-Pruning
Pruning channels for model acceleration
scott-mao/U-2-Net
The code for our newly accepted paper in Pattern Recognition 2020: "U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection."
scott-mao/ACLL-CNN
Auto-Curation Low-Light Convolutional Neural Network
scott-mao/Attention-mechanism
scott-mao/Bread
Official implementation for "Low-light Image Enhancement via Breaking Down the Darkness"
scott-mao/cluster-contrast-reid
scott-mao/DexiNed
DexiNed: Dense EXtreme Inception Network for Edge Detection
scott-mao/DocTr
The official code for “DocTr: Document Image Transformer for Geometric Unwarping and Illumination Correction”, ACM MM, Oral Paper, 2021.
scott-mao/hand_gesture_detect
Using a CNN and an LSTM to build static and dynamic hand gesture detection
scott-mao/LLFlow
The code release of the paper "Low-Light Image Enhancement with Normalizing Flow", AAAI 2022
scott-mao/Masked-Face-Recognition-KD
Mask-invariant Face Recognition through Template-level Knowledge Distillation
scott-mao/Semantic-Guided-Low-Light-Image-Enhancement
This is the official PyTorch implementation for our paper "Semantic-Guided Zero-Shot Learning for Low-Light Image/Video Enhancement."
scott-mao/sparseml
Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
scott-mao/TransMEF
Official PyTorch implementation of our AAAI22 paper: TransMEF: A Transformer-Based Multi-Exposure Image Fusion Framework via Self-Supervised Multi-Task Learning. Code will be available soon.