- Installation
- Demos
- Eva core
- Eva storage
- Dataset
- Clone the repo
- Create a virtual environment with conda (explained in detail in the next subsection)
- Run the following command to configure git hooks
git config core.hooksPath .githooks
- Install conda - we have prepared a yaml file that you can use directly with conda to create a virtual environment
- Navigate to the eva repository on your local machine
- conda env create -f environment.yml
- Note: with this yaml file, installation should complete and all code should run without errors on Ubuntu 16.04. However, there are known installation issues with macOS.
We have demos for the following components:
- Eva analytics (pipeline for loading the dataset, training the filters, and outputting the optimal plan)
cd <YOUR_EVA_DIRECTORY>
python pipeline.py
- Eva Query Optimizer (Will show converted queries for the original queries)
cd <YOUR_EVA_DIRECTORY>
python query_optimizer/query_optimizer.py
- Eva Loader (Loads UA-DETRAC dataset)
cd <YOUR_EVA_DIRECTORY>
python loaders/load.py
NEW!!! There are new versions of the loaders and filters.
cd <YOUR_EVA_DIRECTORY>
python loaders/uadetrac_loader.py
python filters/minimum_filter.py
- EVA storage system (video compression and indexing system - currently in progress)
Eva core consists of:
- Query Optimizer
- Filters
- UDFs
- Loaders
The query optimizer rewrites a given query into an optimized form.
All code related to this module is in /query_optimizer.
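As a rough illustration of what query optimization can mean here, the sketch below orders cheap filter predicates by a cost/selectivity heuristic. The `Predicate` class, field names, and the ordering rule are illustrative assumptions, not the repo's actual API.

```python
# Hypothetical sketch of one query-rewriting idea: run the cheapest, most
# selective filters first. Names and numbers here are illustrative only.

from dataclasses import dataclass


@dataclass
class Predicate:
    column: str       # e.g. "vehicle_type"
    op: str           # e.g. "="
    value: str        # e.g. "car"
    cost: float       # estimated per-frame cost of the cheap filter
    reduction: float  # fraction of frames the filter is expected to discard


def order_predicates(preds):
    """Sort predicates by cost divided by expected reduction, ascending.

    Cheap filters that discard many frames execute before expensive,
    unselective ones, so later filters see fewer frames.
    """
    return sorted(preds, key=lambda p: p.cost / max(p.reduction, 1e-9))


preds = [
    Predicate("color", "=", "red", cost=5.0, reduction=0.2),
    Predicate("vehicle_type", "=", "car", cost=1.0, reduction=0.5),
]
for p in order_predicates(preds):
    print(p.column)  # vehicle_type first (ratio 2.0), then color (ratio 25.0)
```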
The filters perform preliminary filtering of video frames using cheap machine learning models. The filters module also outputs statistics, such as reduction rate and cost, that are used by the query optimizer module.
The following preprocessing method is currently used:
- PCA
The following filters are currently used:
- KDE
- DNN
- Random Forest
- SVM
All code related to this module is in /filters
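The combination above can be sketched as a scikit-learn pipeline: PCA preprocessing followed by one of the listed classifiers (a linear SVM here). This is not the repo's implementation; the synthetic data, array shapes, and labels are assumptions for illustration.

```python
# Illustrative cheap frame filter: PCA preprocessing + linear SVM, in the
# spirit of the filters listed above. Synthetic data stands in for frames.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Stand-in for flattened video frames: 200 samples of 1024 features each.
frames = rng.normal(size=(200, 1024))
labels = (frames[:, 0] > 0).astype(int)  # synthetic binary labels

# PCA reduces dimensionality so the downstream SVM stays cheap to evaluate.
filter_model = make_pipeline(PCA(n_components=32), LinearSVC())
filter_model.fit(frames, labels)

keep_mask = filter_model.predict(frames).astype(bool)
# Reduction rate is the kind of statistic the query optimizer consumes.
reduction_rate = 1.0 - keep_mask.mean()
print(f"frames kept: {keep_mask.sum()}, reduction rate: {reduction_rate:.2f}")
```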
This module contains all imported deep learning models. Currently, there is no code that performs this task; it is a work in progress. Information about the current work is explained in detail here.
All related code should be inside /udfs
The loaders load the dataset with the attributes specified in Accelerating Machine Learning Inference with Probabilistic Predicates by Lu et al.
All code related to this module is in /loaders
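A minimal sketch of the kind of per-frame record a loader might produce, assuming attributes in the spirit of the probabilistic-predicates paper (vehicle type, color, speed). The field names, `load_annotations` helper, and sample rows are illustrative, not the actual UA-DETRAC schema used by /loaders.

```python
# Hypothetical loader sketch: group flat annotation rows into per-frame
# records. Field names and sample values are assumptions for illustration.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class FrameAnnotation:
    frame_id: int
    vehicle_types: List[str] = field(default_factory=list)
    colors: List[str] = field(default_factory=list)
    speeds: List[float] = field(default_factory=list)


def load_annotations(rows: List[Tuple[int, str, str, float]]) -> Dict[int, FrameAnnotation]:
    """Group flat (frame_id, vehicle_type, color, speed) rows by frame."""
    frames: Dict[int, FrameAnnotation] = {}
    for frame_id, vtype, color, speed in rows:
        ann = frames.setdefault(frame_id, FrameAnnotation(frame_id))
        ann.vehicle_types.append(vtype)
        ann.colors.append(color)
        ann.speeds.append(speed)
    return frames


rows = [(0, "car", "white", 32.5), (0, "van", "black", 28.0), (1, "car", "red", 40.1)]
frames = load_annotations(rows)
print(len(frames), frames[0].vehicle_types)  # 2 ['car', 'van']
```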
Currently a work in progress. Come check back later!
Dataset info provides detailed information about the datasets.