WingkeungM
Remote Sensing & Computer Vision Postdoctoral Fellow @ Tsinghua University
WHU -> UCAS -> THU
Beijing, China
WingkeungM's Stars
f/awesome-chatgpt-prompts
A curated collection of ChatGPT prompts to help you use ChatGPT more effectively.
binary-husky/gpt_academic
A practical interaction interface for LLMs such as GPT/GLM, specially optimized for paper reading, polishing, and writing. Modular design with support for custom shortcut buttons & function plugins; project analysis & self-translation for Python, C++, and other codebases; PDF/LaTeX paper translation & summarization; parallel querying of multiple LLMs; and local models such as chatglm3. Integrates Qwen (Tongyi Qianwen), deepseekcoder, iFlytek Spark, ERNIE Bot, llama2, rwkv, claude2, moss, and more.
twitter/the-algorithm
Source code for Twitter's Recommendation Algorithm
facebookresearch/segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
XingangPan/DragGAN
Official Code for DragGAN (SIGGRAPH 2023)
lllyasviel/ControlNet
Let us control diffusion models!
IDEA-Research/Grounded-Segment-Anything
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment, and Generate Anything
OpenGVLab/DragGAN
Unofficial implementation of DragGAN - "Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold" (full-featured DragGAN implementation with an online demo and local deployment; code and models fully open-sourced; supports Windows, macOS, and Linux)
UX-Decoder/Segment-Everything-Everywhere-All-At-Once
[NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"
OpenDriveLab/UniAD
[CVPR 2023 Best Paper Award] Planning-oriented Autonomous Driving
OpenGVLab/InternGPT
InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It now supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, etc. Try it at igpt.opengvlab.com (online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM)
fudan-zvg/Semantic-Segment-Anything
Automated dense category annotation engine that serves as the initial semantic labeling for the Segment Anything dataset (SA-1B).
JiauZhang/DragGAN
Implementation of DragGAN: Interactive Point-based Manipulation on the Generative Image Manifold
Pointcept/Pointcept
Pointcept: a codebase for point cloud perception research. Latest works: PTv3 (CVPR'24 Oral), PPT (CVPR'24), OA-CNNs (CVPR'24), MSC (CVPR'23)
VainF/Awesome-Anything
General AI methods for Anything: AnyObject, AnyGeneration, AnyModel, AnyTask, AnyX
facebookresearch/ConvNeXt-V2
Code release for ConvNeXt V2 model
IDEA-Research/MaskDINO
[CVPR 2023] Official implementation of the paper "Mask DINO: Towards A Unified Transformer-based Framework for Object Detection and Segmentation"
Pointcept/SegmentAnything3D
[ICCV'23 Workshop] SAM3D: Segment Anything in 3D Scenes
tusen-ai/SST
Code for a series of works in LiDAR perception, including SST (CVPR 22), FSD (NeurIPS 22), FSD++ (TPAMI 23), FSDv2, and CTRL (ICCV 23, oral).
dvlab-research/3D-Box-Segment-Anything
We extend Segment Anything to 3D perception by combining it with VoxelNeXt.
PaddlePaddle/PaddleRS
Awesome Remote Sensing Toolkit based on PaddlePaddle.
ZrrSkywalker/MonoDETR
[ICCV 2023] The first DETR model for monocular 3D object detection with depth-guided transformer
OpenDriveLab/ST-P3
[ECCV 2022] ST-P3, an end-to-end vision-based autonomous driving framework via spatial-temporal feature learning.
TuSimple/centerformer
Implementation for CenterFormer: Center-based Transformer for 3D Object Detection (ECCV 2022)
E2E-AD/AD-MLP
skyhehe123/VoxSeT
Voxel Set Transformer: A Set-to-Set Approach to 3D Object Detection from Point Clouds (CVPR 2022)
Haiyang-W/CAGroup3D
[NeurIPS2022] This is the official code of "CAGroup3D: Class-Aware Grouping for 3D Object Detection on Point Clouds".
lxn96/ICPE
The official code for the paper "Breaking Immutable: Information-Coupled Prototype Elaboration for Few-Shot Object Detection"
HanboBizl/DMNet
DMNet for Few-shot Segmentation
chunbolang/R2Net
Official PyTorch Implementation of Global Rectification and Decoupled Registration for Few-Shot Segmentation in Remote Sensing Imagery (TGRS'23).