Deep Dream Travel

Deep Dream Travel allows you to turn a single image into a video, diving through various layers of a neural network.

This effect is achieved by recursively feeding the image into the network and regularly switching the targeted layer.
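
As an illustration, here is a minimal sketch of that loop (not the project's actual implementation): dream() is a placeholder for the real Caffe gradient-ascent step, the layer names are only examples, and the frame count is arbitrary.

import numpy as np

def dream(frame, layer):
    # Placeholder for the real deep dream step that amplifies the
    # activations of `layer` in the network.
    return frame

def zoom(frame, coeff):
    # Crop a slightly smaller centre region and scale it back up, so each
    # frame appears to travel "into" the previous one.
    h, w = frame.shape[:2]
    dh, dw = int(h * coeff), int(w * coeff)
    cropped = frame[dh:h - dh, dw:w - dw]
    rows = np.linspace(0, cropped.shape[0] - 1, h).astype(int)
    cols = np.linspace(0, cropped.shape[1] - 1, w).astype(int)
    return cropped[rows][:, cols]  # nearest-neighbour resize, dependency-free

layers = ["inception_3a/output", "inception_4c/output"]  # example layer names
frame = np.random.rand(240, 320, 3)                      # start from noise
node_switch, resize_coeff = 10, 0.05

frames = []
for i in range(40):
    layer = layers[(i // node_switch) % len(layers)]  # switch layer every node_switch frames
    frame = dream(zoom(frame, resize_coeff), layer)   # feed the result back into the network
    frames.append(frame)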

Demo

[Deep Dream Demo GIF]

Higher resolution video

A higher resolution example can be found on YouTube.

Installation

Create and activate a new Anaconda environment

conda create --name DeepDreamTravel
conda activate DeepDreamTravel

Install required dependencies

pip install -r requirements.txt
conda install psutil

Install Caffe depending on your operating system

conda install caffe              # Linux
conda install caffe -c willyd    # Windows
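
To confirm that the Caffe Python bindings are importable (an optional sanity check, not part of the original steps):

python -c "import caffe; print(caffe.__file__)"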

Usage

Command line

python deepdreamtravel.py INPUT_FILE OUTPUT_VIDEO

or view all options

python deepdreamtravel.py -h
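
For example, to dream outward from the noise image used in the API example below and write the result to an .avi file (both file names are placeholders):

python deepdreamtravel.py noise.jpg DeepDreamTravel.avi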

Within an application

View example.py for an example of running DeepDreamTravel inside your application. Available arguments are listed below.

from deepdreamtravel import DeepDreamTravel

dreamer = DeepDreamTravel('prototxt', 'caffemodel') # Paths to the Caffe model definition (.prototxt) and weights (.caffemodel)
dreamer.generate(input_image="noise.jpg",
                 node_switch=10, # How often the targeted layer should switch
                 resize_coeff=0.05, # Zoom amount in %
                 show_iter=100, # The image will be shown every X iterations
                 offset=[0,0,0,0], # Left, Top, Right, Bottom zoom offset
                 temp_dir="tmp", # Temp directory to store frames
                 start_iter=0, # If generation was interrupted, enter the next iteration here
                 start_index=0, # Which layer index should be used first. Also used when an interruption occurred
                 start_offset=0, # In case there are invalid layers, enter the offset given by the terminal output when the interruption happened
                 max_iteration=False, # If False, all layers will be "explored". Otherwise, set the maximum number of iterations
                 octaves=4, # Higher numbers lead to bigger ("better") visuals but take significantly more time to generate. Lower this in case of errors with small images
                 iter_n=10, # Number of iterations per octave.
                 octave_scale=1.4, # Resize amount per octave. Higher number leads to higher dream states.
                 output_video="DeepDreamTravel.avi", # Output location and name. Don't forget to end with .avi!
                 fps=30, # Frames per second
                 delete_temp=True, # Clean up the generated images after the video has been created
                 max_memory=3 # Maximum amount of memory to use in GB
)
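
If a long run was interrupted, the start_* arguments above let you pick up where it stopped. A hypothetical resume call might look like the following; the specific values are placeholders that would come from the terminal output of the interrupted run, and all other settings should match the original call so the new frames line up with those already in temp_dir.

dreamer.generate(input_image="noise.jpg",
                 start_iter=1200,  # next iteration reported before the interruption
                 start_index=3,    # layer index that was active at that point
                 start_offset=1,   # offset printed in case of invalid layers
                 output_video="DeepDreamTravel.avi")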

Note: The examples were generated with Places205-GoogLeNet. You can find other interesting models here.


Support

If you are having issues with the installation, Nerdy Rodent was kind enough to create a tutorial.

Additionally, support, questions, and requests are handled here.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

License

MIT