Code repository for the experiments and analyses described in the thesis chapter "Towards a task-based computational benchmark for the evaluation of prosthetic vision" (see this doctoral thesis by Jaap de Ruyter van Steveninck).
This chapter is in preparation for submission as:
de Ruyter van Steveninck, J., Danen, S., Küçükoğlu, B., Güçlü, U., van Wezel, R., & van Gerven, M. (2024). Deep reinforcement learning for evaluation and optimization of prosthetic vision.
Neuroprosthetic visual implants are a promising technology to restore some form of visual perception for persons with blindness. As research is actively investigating different prototype designs, the field is looking for easy, cost-effective and non-invasive simulation paradigms to speed up the experimental cycle of hypothesis testing. In this study, we propose a deep-reinforcement-learning-based computational framework for that purpose.
A virtual implant user (a double Q-learning agent implemented in PyTorch) performs a mobility task in a 3D visual environment (built in the Unity game engine). The experiments are modeled after a previously published simulation study with sighted human participants (de Ruyter van Steveninck et al., 2022). The agent moves through a virtual hallway, taking actions based on simulated prosthetic visual input. Several experimental parameters can be freely adjusted, such as the complexity of the hallway, the resolution of the phosphene vision, and the image processing.
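As a generic, minimal sketch of the double Q-learning idea (an illustration of the technique, not the code used in this repository): the online network selects the greedy next action while a separate target network evaluates it, which reduces the overestimation bias of standard Q-learning.

```python
import torch

def double_dqn_targets(online_net, target_net, rewards, next_states, dones, gamma=0.99):
    """Generic double Q-learning target computation (illustrative sketch only)."""
    with torch.no_grad():
        # Action selection with the online network
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        # Action evaluation with the target network
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        # Bootstrapped target; 'dones' masks out terminal transitions
        return rewards + gamma * (1.0 - dones) * next_q
```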
- Jaap de Ruyter
- Sam Danen
- Umut Güçlü
To run the pre-built Unity environment:
- Start the application `Unity/Build/Windows/RL_Hallway.exe` or `Unity/Build/Linux/RL-Hallway.x86_64`, depending on your operating system.
- Specify the environment variables, IP address, etc. and press 'start'.
Basic example (see Demo directory for more):
```python
import pyClient
import matplotlib.pyplot as plt

# Connect to the environment
environment = pyClient.Environment(ip="127.0.0.1", port=13000)

# Reset the environment (kind 0: plain, 1: complex)
_, _, _ = environment.reset(kind=0)

# Move the agent in the environment (action 0: forward, 1: left, 2: right)
end, reward, state = environment.step(action=0)

# Visualize the current state observation
plt.imshow(state['colors'])
plt.show()
```
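Building on the calls above, a complete (random-policy) episode could be driven as in the sketch below. It assumes only the `reset`/`step` interface from the basic example, including that `reset` returns the same `(end, reward, state)` triple as `step`; the step limit and random action choice are illustrative.

```python
import random
import pyClient

# Connect to a running environment server
environment = pyClient.Environment(ip="127.0.0.1", port=13000)

# Start a new episode in the plain hallway
_, _, state = environment.reset(kind=0)

total_reward = 0
for _ in range(100):                    # illustrative step limit
    action = random.choice([0, 1, 2])   # 0: forward, 1: left, 2: right
    end, reward, state = environment.step(action=action)
    total_reward += reward
    if end:                             # the episode has ended
        break

print("Episode return:", total_reward)
```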
The pipeline can be trained using a YAML configuration file or a CSV file that specifies multiple training instances. The following command initiates the training with the demo configuration:
$ python training.py -c _config.yaml
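Because the demo configuration is plain YAML, it can also be inspected or modified programmatically before a run. A minimal sketch, assuming only that PyYAML is installed; the key shown is purely hypothetical and not an actual configuration field:

```python
import yaml

# Load the demo configuration and list its top-level settings
with open("_config.yaml") as f:
    config = yaml.safe_load(f)
print(sorted(config))

# Write a modified copy to use as a variant configuration
# (the key name below is hypothetical, for illustration only)
config["hypothetical_setting"] = 42
with open("_config_variant.yaml", "w") as f:
    yaml.safe_dump(config, f)
```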
To use and adapt the source code for the Unity environment, simply:
- Install Unity (we used v2019.3.7f1, but other versions should work as well).
- Launch Unity Hub and press 'ADD' to add the source code (directory: `Unity/indoor-mobility`) as a new project.
- (optional) Build the application to enable launching it outside Unity.
To reproduce the analysis for the thesis chapter:
- Download the experimental results from here and save them to the output directory `Python/Experiments/Out/`.
- Run the Jupyter notebook for the data analysis: `Python/Experiments/data_analysis_phd_thesis_dec2023.ipynb`.
To run experiments yourself without needing to install Unity:
- Run the pre-built Unity application hosting the virtual environment: start `Unity/Build/Windows/RL_Hallway.exe` or `Unity/Build/Linux/RL-Hallway.x86_64`, depending on your operating system, then specify the environment variables, IP address, etc. and press 'start'.
- Choose a training configuration:
  - Either adjust `_config.yaml` according to your own preferences,
  - Or use the train specifications of the experiments described in the chapter, which are located in `Experiments/_train_specs/`.
- Run the Python reinforcement learning client:
$ python Python/training.py -c _config.yaml
or
$ python Python/training.py -s Experiments/_train_specs/<csv train specification>

For usage and implementation details, refer to the demo notebook `PythonScripts/demoUsage.ipynb`.
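To run several of the chapter's train specifications back to back, the `-s` invocation above can be scripted. The loop below is only a sketch: it assumes each `.csv` file in `Experiments/_train_specs/` is a self-contained specification and that the command is issued from the repository root.

```python
import subprocess
from pathlib import Path

SPEC_DIR = Path("Experiments/_train_specs")

# Launch one training run per CSV train specification, one after another
for spec in sorted(SPEC_DIR.glob("*.csv")):
    print(f"Training with specification: {spec.name}")
    subprocess.run(["python", "Python/training.py", "-s", str(spec)], check=True)
```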
To run a demo navigation through the environment:
- Start the environment server by running the Unity server application, which can be found in the Unity Build folder. In the GUI, press 'start' to accept the default environment parameters.
- Run `python Python/demoNavigation.py`.
Controls:
- 1: normal vision
- 2: low resolution prosthetic vision
- 3: high resolution prosthetic vision
- w: forward
- a: left
- d: right
- r: reset
- q: quit
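For a rough idea of how such keyboard control maps onto the client API from the basic example, see the sketch below. Only the movement, reset and quit keys are included; the simple `input()`-based prompt is an illustration and not necessarily how `demoNavigation.py` handles key presses, and the vision-mode keys (1/2/3) are omitted because they concern the phosphene simulation rather than the environment client.

```python
import pyClient

# Connect to the running environment server
environment = pyClient.Environment(ip="127.0.0.1", port=13000)
environment.reset(kind=0)

# Movement keys from the list above, mapped onto the client's action codes
KEY_TO_ACTION = {"w": 0, "a": 1, "d": 2}  # forward, left, right

while True:
    key = input("w/a/d to move, r to reset, q to quit: ").strip().lower()
    if key == "q":
        break
    if key == "r":
        environment.reset(kind=0)
    elif key in KEY_TO_ACTION:
        end, reward, state = environment.step(action=KEY_TO_ACTION[key])
        if end:
            print("Episode ended, resetting.")
            environment.reset(kind=0)
```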