This is the implementation of the paper "Segment-Based, User-Generated Image Styling with Neural Style Transfer".
What does it do?
- Generate a style image from a user-provided prompt
- Train a transformation network on the generated style image
- Apply style transfer to one specific segment of the content image
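A minimal sketch of how these steps fit together, assuming a Stable Diffusion checkpoint loaded through diffusers for the style-image generation and a precomputed segmentation mask plus stylized output for the segment blending; the model ID, prompt, and file names are placeholders, not necessarily what the notebook uses:

```python
# Sketch under stated assumptions: diffusers with a Stable Diffusion checkpoint,
# and content/stylized/mask images of the same size. All names are placeholders.
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline

# 1. Generate a style image from the user's prompt.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
style_image = pipe("starry night, thick oil paint, swirling sky").images[0]
style_image.save("style.png")

# 2. A transformation network (fast style transfer) would be trained on
#    style.png; its output for the content image is assumed saved as stylized.png.

# 3. Apply the style only inside one segment: blend the stylized output with the
#    original content image using a binary segmentation mask.
content = np.asarray(Image.open("content.png").convert("RGB"), dtype=np.float32)
stylized = np.asarray(Image.open("stylized.png").convert("RGB"), dtype=np.float32)
mask = np.asarray(Image.open("mask.png").convert("L"), dtype=np.float32)[..., None] / 255.0

blended = mask * stylized + (1.0 - mask) * content
Image.fromarray(blended.astype(np.uint8)).save("result.png")
```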
Here's a glimpse of how it works:
Note: Before playing around with the code, make sure to add your own Hugging Face token in the first cell.
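If you prefer to log in programmatically instead of pasting the token directly, huggingface_hub can be used as shown below (the token string is a placeholder; create your own at https://huggingface.co/settings/tokens):

```python
from huggingface_hub import login

# Placeholder token; replace with your own Hugging Face access token.
login(token="hf_xxxxxxxxxxxxxxxxxxxx")
```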