xwhkkk's Stars
binary-husky/gpt_academic
Provides a practical interactive interface for large language models such as GPT/GLM, with special optimization for paper reading, polishing, and writing. Modular design with support for custom shortcut buttons & function plugins; project analysis & self-translation for Python, C++, and other codebases; PDF/LaTeX paper translation & summarization; parallel queries to multiple LLMs; and local models such as ChatGLM3. Integrates 通义千问 (Tongyi Qianwen), deepseekcoder, 讯飞星火 (iFLYTEK Spark), 文心一言 (ERNIE Bot), llama2, rwkv, claude2, moss, and more.
facebookresearch/segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
kaixindelele/ChatPaper
Use ChatGPT to summarize arXiv papers. Accelerates the entire research workflow: full-paper summarization, professional translation, polishing, reviewing, and drafting review responses with ChatGPT.
WongKinYiu/yolov7
Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
windingwind/zotero-better-notes
Everything about note management. All in Zotero.
ChaoningZhang/MobileSAM
This is the official code for MobileSAM project that makes SAM lightweight for mobile applications and beyond!
AprilNEA/ChatGPT-Admin-Web
One-stop system for shared use of AI within teams and organizations.
xinghaochen/awesome-hand-pose-estimation
Awesome work on hand pose estimation/tracking
ahmetbersoz/chatgpt-prompts-for-academic-writing
This list of writing prompts covers a range of topics and tasks, including brainstorming research ideas, improving language and style, conducting literature reviews, and developing research plans.
z-x-yang/Segment-and-Track-Anything
An open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively. The primary algorithms utilized include the Segment Anything Model (SAM) for key-frame segmentation and Associating Objects with Transformers (AOT) for efficient tracking and propagation purposes.
yformer/EfficientSAM
EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything
hkchengrex/XMem
[ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model
chongzhou96/EdgeSAM
Official PyTorch implementation of "EdgeSAM: Prompt-In-the-Loop Distillation for On-Device Deployment of SAM"
SciSharp/Numpy.NET
C#/F# bindings for NumPy - a fundamental library for scientific computing, machine learning and AI
PruneTruong/DenseMatching
Dense matching library based on PyTorch
yoxu515/aot-benchmark
An efficient modular implementation of Associating Objects with Transformers for Video Object Segmentation in PyTorch
kentaroy47/vision-transformers-cifar10
Let's train vision transformers (ViT) for CIFAR-10!
hellozhuo/pidinet
Code for the ICCV 2021 paper "Pixel Difference Networks for Efficient Edge Detection" (Oral).
qitianwu/DIFFormer
The official implementation for ICLR23 spotlight paper "DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion"
vietanhdev/samexporter
Export Segment Anything Models to ONNX
mitvis/olli
A library for converting web visualizations into accessible text structures for blind and low-vision screen reader users.
YunzhuLi/VisGel
[CVPR 2019] Connecting Touch and Vision via Cross-Modal Prediction
RuihanGao/visual-tactile-synthesis
We synthesize synchronized visual appearance and tactile geometry given a sketch of objects and render the multimodal output on a surface haptic device called TanvasTouch.
AndreyGermanov/sam_onnx_full_export
This repository shows how to solve the ONNX export issue in the Segment Anything model.
pslade2/AugmentedCane
The Augmented Cane project improves the mobility of people with impaired vision by using cutting-edge robotics.
fredfyyang/Touch-and-Go
fredfyyang/vision-from-touch
elisakreiss/contextref
shaoyuca/FrictGAN-Image-to-Friction-Generation
FrictGAN: source code and dataset for the CAG journal paper.
elisakreiss/contextual-description-evaluation