
VIVID (Virtual Environment for Visual Deep Learning) is an open-source project developed by NTU NetDB (http://arbor.ee.ntu.edu.tw/) and the AIoT Lab (http://www.aiotlab.org).


VIVID - Virtual Environment for Visual Deep Learning

VIVID (VIrtual environment for VIsual Deep learning) is a photo-realistic simulator that aims to facilitate deep learning for computer vision. VIVID supports four different characters: robot (mannequin), simple drone, AirSim drone and automobile. Twelve large, diversified indoor and outdoor scenes are included. In addition, we create NPCs with simulated human actions to mimic real-world events, such as gun shooting and forest fire rescue. VIVID is based on Unreal Engine and Microsoft AirSim. Documentation and tutorials can be found on our Wiki page.

Demo clips: samba dance, flying in a ruined school, forest fire, gun shooting detection.

Architecture

The architecture of VIVID is shown below. Our system is powered by Unreal Engine and leverages the AirSim plugin for hardware simulation and control. Remote procedure calls (RPC) are used to communicate with external programming languages. Currently VIVID supports four different characters: robot, simple drone, AirSim drone and automobile. Users can select characters and scenes from the in-game menu.
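As a rough illustration of the RPC-based control path, the snippet below is a minimal sketch, assuming the AirSim drone character is selected in the in-game menu and the standard `airsim` Python client is installed; it connects to the simulator's RPC server and reads back the vehicle state.

```python
# Minimal sketch: talking to the simulator over AirSim's RPC interface.
# Assumes the AirSim drone character is active and the `airsim` package
# (pip install airsim) is available.
import airsim

# Connect to the RPC server embedded in the simulator (default 127.0.0.1:41451)
client = airsim.MultirotorClient()
client.confirmConnection()

# Take API control of the vehicle and read back its current state
client.enableApiControl(True)
state = client.getMultirotorState()
print("Landed state:", state.landed_state)
print("Position:", state.kinematics_estimated.position)
```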

Documentation

Documentation and tutorials are available in our GitHub Wiki.

Human Actions

Some examples of human actions in VIVID are shown below. The actions, from left to right, are shooting, dying, jumping, walking, surrendering, moaning in pain, running, police running with rifle, crouching and dancing. Most action models can be downloaded from Mixamo.

Download Source Code

The source code and the UE4 project file can be downloaded from the /source folder. Note that you need to install the UE4 editor first. Only Windows is supported at the moment; a Linux version is coming soon.

Download Binaries

The pre-compiled binary files can be downloaded here:

Python Controls

See the examples in /python_example.
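For instance, a typical control-and-capture loop looks like the hedged sketch below. It is not taken from /python_example and assumes the AirSim drone character; the scripts shipped with VIVID may differ in detail.

```python
# Hypothetical control loop: take off, move, and grab a camera frame for a
# vision model. Assumes the AirSim drone character and the `airsim` package.
import airsim
import numpy as np

client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

# Take off, then fly to (5, 0) at 3 m altitude (NED: negative z is up) at 2 m/s
client.takeoffAsync().join()
client.moveToPositionAsync(5, 0, -3, 2).join()

# Request an uncompressed RGB frame from the front camera ("0")
responses = client.simGetImages([
    airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)
])
img = np.frombuffer(responses[0].image_data_uint8, dtype=np.uint8)
img = img.reshape(responses[0].height, responses[0].width, -1)
print("Captured frame:", img.shape)

client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```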