MiDaS-Computer-Vision-Depth-Estimation

This Python code uses the MiDaS model for real-time depth estimation on webcam video. MiDaS predicts the relative depth of objects in a scene; the low-resolution depth map produced by the model is upsampled with bicubic interpolation and displayed either with matplotlib (mainV1.py) or in an interactive Streamlit web app (mainV2.py).
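The upsampling mentioned above amounts to a single torch.nn.functional.interpolate call. A minimal sketch of that step, with illustrative names that are not taken from the scripts themselves:

```python
# Hypothetical sketch of the upsampling step: the raw MiDaS prediction is smaller
# than the input frame, so it is resized back to the frame's resolution with
# bicubic interpolation. Names are illustrative only.
import torch

def upsample_depth(prediction: torch.Tensor, frame_height: int, frame_width: int) -> torch.Tensor:
    """Resize a raw MiDaS depth prediction of shape (N, H', W') to the frame size."""
    return torch.nn.functional.interpolate(
        prediction.unsqueeze(1),           # (N, H', W') -> (N, 1, H', W')
        size=(frame_height, frame_width),  # target spatial size of the webcam frame
        mode="bicubic",
        align_corners=False,
    ).squeeze(1)                           # back to (N, H, W)
```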



mainV2.py Description

Incorporates mainV1.py's depth-estimation pipeline into a Streamlit web application.
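Roughly, the wrapping looks like this. A minimal sketch, assuming the model is loaded through torch.hub, frames are read with OpenCV, and the depth map is rendered with st.image; the actual mainV2.py may differ in structure and naming:

```python
# Minimal sketch of a Streamlit wrapper around the MiDaS webcam loop (assumed
# structure, not copied from mainV2.py). Run with: streamlit run mainV2.py
import cv2
import streamlit as st
import torch

@st.cache_resource
def load_midas():
    # Download the model and its transforms once and reuse them across reruns.
    midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
    midas.eval()
    transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
    return midas, transforms.small_transform

st.title("MiDaS Depth Estimation")
run = st.checkbox("Run webcam")
frame_slot = st.empty()  # placeholder updated in place on every frame

midas, transform = load_midas()
cap = cv2.VideoCapture(0)

while run:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        prediction = midas(transform(rgb))
        # Upsample the low-resolution prediction to the frame size (bicubic).
        depth = torch.nn.functional.interpolate(
            prediction.unsqueeze(1), size=rgb.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze().cpu().numpy()
    # Normalise to 0-255 and apply a colormap for display.
    depth = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
    depth_color = cv2.applyColorMap(depth, cv2.COLORMAP_MAGMA)
    frame_slot.image(cv2.cvtColor(depth_color, cv2.COLOR_BGR2RGB), caption="MiDaS depth map")

cap.release()
```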

mainV2.py Output:

[Screenshot: depth map rendered in the Streamlit web app]

mainV1.py Description

This Python code uses the MiDaS monocular depth estimation model to perform depth estimation on live video from a webcam.

The code uses torch.hub to download the MiDaS model and its associated transforms pipeline, and OpenCV to read live video frames from the webcam.

For each frame, the input is transformed with the MiDaS transforms pipeline, a depth prediction is made with the MiDaS model, and the prediction is upsampled to the frame's resolution with bicubic interpolation. The resulting depth map is displayed as an image with matplotlib.

The code also displays the original video frames in a window using OpenCV's imshow() function. The video stream can be stopped by pressing the 'q' key on the keyboard.
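A minimal sketch of the loop described above, assuming the MiDaS_small variant and its small_transform are used; the actual mainV1.py may use a different model variant or naming:

```python
# Sketch of the webcam depth-estimation loop (assumed, not copied from mainV1.py).
import cv2
import matplotlib.pyplot as plt
import torch

# Download the MiDaS model and its transforms via torch.hub.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

cap = cv2.VideoCapture(0)  # open the default webcam
plt.ion()                  # interactive mode so the matplotlib figure updates per frame

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        prediction = midas(transform(rgb))
        # Upsample the low-resolution prediction back to the frame size (bicubic).
        depth = torch.nn.functional.interpolate(
            prediction.unsqueeze(1),
            size=rgb.shape[:2],
            mode="bicubic",
            align_corners=False,
        ).squeeze().cpu().numpy()

    plt.imshow(depth)            # depth map shown via matplotlib
    plt.pause(0.001)
    cv2.imshow("Webcam", frame)  # original frame shown via OpenCV

    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop the stream
        break

cap.release()
cv2.destroyAllWindows()
plt.ioff()
```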

mainV1.py Output:

[Screenshot: matplotlib depth map alongside the OpenCV webcam window]