Pinned Repositories
Code-mix
Cpp-C-Prime
C++ Primer, 5th Edition
cs231n-Spring2019-assignment
Complete solutions for the Stanford CS231n (Spring 2019) assignments.
deep_sort_pytorch
Multi-object tracking (MOT) using DeepSORT and YOLOv3 with PyTorch
geerpc
go-micro-demo
Posture-and-Fall-Detection-System-Using-3D-Motion-Sensors
This work presents a supervised learning approach for training a posture detection classifier and implements a fall detection system that uses the posture classification results as inputs, based on a Microsoft Kinect v2 sensor. The Kinect v2 skeleton tracking provides 3D depth coordinates for 25 body parts. We use these coordinates to extract seven features: the height of the subject and six angles between certain body parts. These features are fed into a fully connected neural network that outputs one of three considered postures for the subject: standing, sitting, or lying down. An average classification rate of over 99.30% across all three postures was achieved on test data from multiple subjects, even though the subjects were often not facing the Kinect depth camera and were located at different positions in the room. These results show that the proposed setup can classify human postures independently of the subject's location in the room and orientation relative to the 3D sensor.
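A minimal sketch of the pipeline described above, assuming the Kinect v2 skeleton has already been captured as a 25 x 3 array of joint coordinates. The `compute_features` helper, the chosen joint triplets, and the network layer sizes are illustrative assumptions, not the repository's actual implementation.

```python
# Hypothetical sketch of the posture classifier described above.
# Feature extraction details and layer sizes are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn


def compute_features(joints: np.ndarray) -> torch.Tensor:
    """Turn 25 Kinect v2 joint positions (25 x 3) into 7 features:
    subject height plus six inter-joint angles (assumed joint triplets)."""

    def angle(a, b, c):
        # Angle at joint b formed by the segments b->a and b->c, in radians.
        v1, v2 = joints[a] - joints[b], joints[c] - joints[b]
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
        return np.arccos(np.clip(cos, -1.0, 1.0))

    # Vertical extent of the skeleton, assuming the y-axis points up.
    height = joints[:, 1].max() - joints[:, 1].min()
    # Illustrative joint-index triplets; the actual pairs used are defined in the repo.
    triplets = [(20, 0, 12), (20, 0, 16), (0, 12, 13),
                (0, 16, 17), (12, 13, 14), (16, 17, 18)]
    angles = [angle(a, b, c) for a, b, c in triplets]
    return torch.tensor([height, *angles], dtype=torch.float32)


class PostureNet(nn.Module):
    """Fully connected classifier: 7 features -> {standing, sitting, lying down}."""

    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(7, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # logits for the three postures
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


if __name__ == "__main__":
    joints = np.random.rand(25, 3)           # stand-in for real Kinect skeleton data
    features = compute_features(joints)
    logits = PostureNet()(features.unsqueeze(0))
    posture = ["standing", "sitting", "lying down"][logits.argmax(dim=1).item()]
    print(posture)
```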
Python_OpenNI2
Sample applications that use the official Python wrappers for OpenNI 2 (with OpenCV)
SOTS
Single object tracking and segmentation.
TracKit
[ECCV'20] Ocean: Object-aware Anchor-Free Tracking
n1-k0's Repositories
n1-k0/Code-mix
n1-k0/Cpp-C-Prime
C++ Primer, 5th Edition
n1-k0/cs231n-Spring2019-assignment
Complete solutions for the Stanford CS231n (Spring 2019) assignments.
n1-k0/deep_sort_pytorch
Multi-object tracking (MOT) using DeepSORT and YOLOv3 with PyTorch
n1-k0/geerpc
n1-k0/go-micro-demo
n1-k0/Posture-and-Fall-Detection-System-Using-3D-Motion-Sensors
This work presents a supervised learning approach for training a posture detection classifier and implements a fall detection system that uses the posture classification results as inputs, based on a Microsoft Kinect v2 sensor. The Kinect v2 skeleton tracking provides 3D depth coordinates for 25 body parts. We use these coordinates to extract seven features: the height of the subject and six angles between certain body parts. These features are fed into a fully connected neural network that outputs one of three considered postures for the subject: standing, sitting, or lying down. An average classification rate of over 99.30% across all three postures was achieved on test data from multiple subjects, even though the subjects were often not facing the Kinect depth camera and were located at different positions in the room. These results show that the proposed setup can classify human postures independently of the subject's location in the room and orientation relative to the 3D sensor.
n1-k0/Python_OpenNI2
Sample applications that use the official Python wrappers for OpenNI 2 (with OpenCV)
n1-k0/SOTS
Single object tracking and segmentation.
n1-k0/pytracking
Visual tracking library based on PyTorch.
n1-k0/ruan-jian-kai-fa
n1-k0/siamban
Siamese Box Adaptive Network for Visual Tracking
n1-k0/sort
Simple, online, and realtime tracking of multiple objects in a video sequence.
n1-k0/tech
2021 interview questions: Java, JVM, multithreading, concurrent programming, design patterns, Spring, MyBatis, ZooKeeper, Dubbo, Elasticsearch, Memcached, MongoDB, Redis, MySQL, RabbitMQ, Kafka, Linux, Netty, Tomcat, Python, HTML, CSS, Vue, React, JavaScript, and Android
n1-k0/test
n1-k0/updatenet
Learning the Model Update for Siamese Trackers (ICCV 2019)
n1-k0/yolov5
YOLOv5 in PyTorch > ONNX > CoreML > TFLite