The MaViS (Machine Vision Security) system is a machine-learning-based security platform that automatically monitors a scene, detects people, and alerts the user in real time by sending an image and video to their email. The system combines edge computing with cloud infrastructure: the edge platform is the Nvidia Jetson Nano 4GB Developer Kit, and the cloud infrastructure is built on Amazon Web Services (AWS).
This project is a final deliverable for the Full Stack Deep Learning course.
The edge component of the project went through three iterations. This repository contains only the code for the Nvidia Jetson Nano used in the final version of the project.
A short report is available that includes:
- A more detailed project history.
- A description of the engineering design.
- The process of setting up the Raspberry Pi 4, the Jetson Nano and AWS.
A full video demo and project explanation can be found here.
This repository contains only the details and code to set up the Nvidia Jetson Nano and run the MaViS software.
Setting up the Jetson Nano includes the following steps:
- Install JetPack 4.5.1
- Install DeepStream SDK 5.1
- Install MaViS
- Set up AWS Connection (optional)
- Run MaViS
To install JetPack 4.5.1, please follow the instructions here.
To install DeepStream SDK 5.1, please follow the instructions here.
To install MaViS clone this repository onto the Jetson Nano:
$ git clone https://github.com/jasondeglint/MaViS.git
To set up AWS, run the following commands:
$ sudo apt install python3-pip
$ pip3 install boto3
$ pip3 install awscli --upgrade --user
To set up your login credentials, run the following command:
$ python3 -m awscli configure
This will create a credentials file in the ~/.aws folder.
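Running the configure command prompts for your AWS access key ID, secret access key, default region, and output format (the region and output format are stored in a separate ~/.aws/config file). The resulting credentials file typically looks like the following, where the values are placeholders for your own keys:

```
[default]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>
```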
To check that the install and credentials are working, run an AWS CLI command, for example:
$ python3 -m awscli s3 ls
The Python code for the Nvidia Jetson contains two scripts:
- The main.py script monitors the video stream and automatically saves frames that contain a positive classification.
- The monitor_and_upload.py script uploads a sample image as soon as an intruder enters the scene, and then uploads a video once the intruder leaves the scene (a minimal sketch of the upload step is shown after this list).
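For context, the upload step in monitor_and_upload.py is essentially a thin wrapper around boto3. The sketch below is illustrative only; the bucket name, key prefix, and helper name are assumptions for this example, not the exact code in this repository.

```python
# Illustrative boto3 upload sketch -- not the exact code in this repository.
# Assumes AWS credentials were configured as described above and that an S3
# bucket named "mavis-alerts" (hypothetical) already exists.
import os
import boto3

s3 = boto3.client("s3")

def upload_alert(local_path, bucket="mavis-alerts", key_prefix="alerts/"):
    """Upload a saved frame or video clip so the cloud side can email the user."""
    key = key_prefix + os.path.basename(local_path)
    s3.upload_file(local_path, bucket, key)
    return key

# Example: upload_alert("/home/nano/images/intruder_0001.jpg")
```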
To properly run the entire system you must run both scripts at the same time in two separate terminals.
To run the DeepStream code:
$ python3 main.py <v4l2-device-path> <output-folder-name>
For example:
$ python3 main.py /dev/video0 ~/images/
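If you are unsure which device path your camera is on, you can list the available V4L2 devices first (this assumes the v4l-utils package is installed, e.g. via sudo apt install v4l-utils):
$ v4l2-ctl --list-devices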
To run the monitoring code:
$ python3 monitor_and_upload.py <input-folder-name> <archive-folder-name> UPLOAD_TO_AWS
Here, UPLOAD_TO_AWS is a boolean flag. To upload to AWS, enter true or True; entering any other character(s) disables uploading.
For example:
$ python3 monitor_and_upload.py ~/images/ ~/archive/ False
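For reference, the UPLOAD_TO_AWS argument arrives as a plain string on the command line; a minimal sketch of how such a flag can be turned into a boolean in Python is shown below (the actual check inside monitor_and_upload.py may differ):

```python
import sys

# Only "true" or "True" enables the AWS upload; any other value disables it.
upload_to_aws = sys.argv[3] in ("true", "True")
```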