Pinned Repositories
current-lane-drivable
This project uses the Mask R-CNN network model; the code is largely based on the open-source [matterport project](https://github.com/matterport/Mask_RCNN).
Docker_Tutorial
Lane-Detection-Based-PINet
The project uses PINet as the lane detector and supports training on [VIL-100](https://github.com/yujun0-0/MMA-Net/tree/main/dataset). It also supports model conversion to ONNX and Caffe formats, as well as forward-pass acceleration before deployment, which mainly covers model pruning, graph simplification, and merging of BatchNorm layers. In addition, it verifies and compares output conformance after conversion (see the sketch below).
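A minimal sketch of such a post-conversion conformance check, assuming a PyTorch model exported with `torch.onnx.export` and run with ONNX Runtime; the stand-in network, input shape, and tolerances are placeholders, not taken from this repository.

```python
# Export a PyTorch model to ONNX, run the same input through both frameworks,
# and compare the outputs. A tiny stand-in network replaces the real detector.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# Placeholder for the lane detector (assumption, not the real PINet definition).
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU())
model.eval()

dummy = torch.randn(1, 3, 256, 512)  # NCHW input; the shape is an assumption
torch.onnx.export(model, dummy, "detector.onnx",
                  input_names=["input"], output_names=["output"], opset_version=11)

# Reference output from PyTorch.
with torch.no_grad():
    ref = model(dummy).numpy()

# Output from the converted ONNX model.
sess = ort.InferenceSession("detector.onnx", providers=["CPUExecutionProvider"])
out = sess.run(None, {"input": dummy.numpy()})[0]

# Conformance: the two outputs should agree within floating-point tolerance.
print("max abs diff:", np.abs(ref - out).max())
np.testing.assert_allclose(ref, out, rtol=1e-3, atol=1e-5)
```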
Parse_Curvelanes
Parse the CurveLanes dataset.
parse_vil100
For parsing and converting the VIL-100 dataset.
SemanticSegmentation_DL
Resources for semantic segmentation based on deep learning models.
state-of-the-art-result-for-machine-learning-problems
This repository provides state-of-the-art (SoTA) results for all machine learning problems. We do our best to keep it up to date. If you find that a problem's SoTA result is out of date or missing, please raise an issue or submit the Google form (with the research paper name, dataset, metric, source code, and year), and we will fix it immediately.
toy-classification-pytorch
Carrier of tricks for image classification tutorials, using PyTorch.
tutorials
Machine learning tutorials.
pandamax's Repositories
pandamax/Parse_Curvelanes
Parse the CurveLanes dataset.
pandamax/parse_vil100
For parsing and converting the VIL-100 dataset.
pandamax/Lane-Detection-Based-PINet
The project uses PINet as the lane detector and supports training on [VIL-100](https://github.com/yujun0-0/MMA-Net/tree/main/dataset). It also supports model conversion to ONNX and Caffe formats, as well as forward-pass acceleration before deployment, which mainly covers model pruning, graph simplification, and merging of BatchNorm layers. In addition, it verifies and compares output conformance after conversion (a BatchNorm-folding sketch follows).
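The BatchNorm-merging step mentioned above can be sketched as folding a `BatchNorm2d` into the preceding `Conv2d`, so the fused convolution alone reproduces conv+BN at inference time; the helper below is an illustrative assumption, not the repository's actual fusion code.

```python
# Fold a BatchNorm2d that follows a Conv2d into the convolution's weights/bias.
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding,
                      dilation=conv.dilation, groups=conv.groups, bias=True)
    # scale = gamma / sqrt(running_var + eps), one factor per output channel
    scale = bn.weight.data / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros(conv.out_channels)
    # b' = (b - running_mean) * scale + beta
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias.data
    return fused

# Quick check: the fused conv matches conv followed by BN in eval mode.
conv, bn = nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8)
bn.train()
_ = bn(conv(torch.randn(4, 3, 32, 32)))  # update running stats so the check is non-trivial
bn.eval()
x = torch.randn(1, 3, 32, 32)
with torch.no_grad():
    ref = bn(conv(x))
    out = fuse_conv_bn(conv, bn)(x)
print("max abs diff:", (ref - out).abs().max().item())
```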
pandamax/current-lane-drivable
This project uses the Mask R-CNN network model; the code is largely based on the open-source [matterport project](https://github.com/matterport/Mask_RCNN).
pandamax/awesome-embedded-ai
【WeChat: NeuroMem】 Weekly report and awesome list of embedded AI.
pandamax/toy-classification-pytorch
carrier of tricks for image classification tutorials using pytorch.
pandamax/apollo
An open autonomous driving platform
pandamax/awesome-c-cn
A Chinese-language curated list of C resources: build systems, compilers, databases, encryption, beginner-to-advanced tutorials/guides, books, libraries, and more.
pandamax/awesome-cpp-cn
A Chinese-language curated list of C++ resources: the standard library, web application frameworks, artificial intelligence, databases, image processing, machine learning, logging, code analysis, and more.
pandamax/Classifier
Classifier implemented in PyTorch.
pandamax/cmake-examples
Useful CMake Examples
pandamax/cmake_examples
Practical, Easy-to-copy CMake examples
pandamax/cvat
Powerful and efficient Computer Vision Annotation Tool (CVAT)
pandamax/cvml_project
Projects and application using computer vision and machine learning
pandamax/deepvac
PyTorch Python project standard.
pandamax/fritz-models
Train and deploy machine learning models for mobile apps with Fritz.
pandamax/HowToCook
A programmer's guide to cooking at home (in Chinese).
pandamax/HRNet-Semantic-Segmentation
The OCR approach is rephrased as Segmentation Transformer: https://arxiv.org/abs/1909.11065. This is an official implementation of semantic segmentation based on HRNet: https://arxiv.org/abs/1908.07919
pandamax/learngit
pandamax/learnopencv
Learn OpenCV : C++ and Python Examples
pandamax/maskrcnn-benchmark
Fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch.
pandamax/netron
Visualizer for neural network, deep learning and machine learning models
pandamax/pandamax.github.io
pandamax/PINet_new
pandamax/python-api-tesing
Chinese-language Python resources: Python for AI, big data, and automated API test development. Book downloads and a Python library roundup at https://china-testing.github.io/
pandamax/pytorch_tricks
Some tricks for PyTorch.
pandamax/slambook2
edition 2 of the slambook
pandamax/TLCL
Chinese translation of "The Linux Command Line" (《快乐的 Linux 命令行》).
pandamax/wechat-public-account-push
WeChat official account push notifications.
pandamax/YOLOP
You Only Look Once for panoptic driving perception (https://arxiv.org/abs/2108.11250).