Do not waste your time with this old repo; here is the working one!
NeoAnthropocene opened this issue · 7 comments
Hi guys,
I spent hours with this repo. It worked at first, then something broke and now I can't run the script at all.
Take my advice: don't waste your time struggling with this old repo, which is effectively dead.
Linked below is an extension (plugin) for A1111. If you're interested in this kind of image creation, you'll very likely end up using A1111 anyway.
The plugin works flawlessly, with no setup hassle.
https://github.com/thygate/stable-diffusion-webui-depthmap-script
This example shows how you can use it.
Please come back and leave feedback here once you've had success, so other people can benefit from this info too.
Thanks.
Hello, NeoAnthropocene!
Please explain clearly: what is this project about?
I've already built two depth-mask PNG files for one source photo in Google Colab, but I'm not familiar with this topic.
What should my next step be?
The original project (3d-photo-inpainting) was pretty simple: it helped me make a short MP4 video automatically in the Colab environment. What about this one? If you're suggesting an alternative, you should probably give a short explanation! Thank you in advance!
Like many other people, I don't have a powerful computer with a GPU. That's why I used the current project, 3d-photo-inpainting, on Google Colab. Is it possible to do the same with the new project you're suggesting? If so, please share a Colab link. Thanks.
It seems you can, but I never tried it.
Can I run this on Google Colab?
- You can run the MiDaS network on their Colab, linked here: https://pytorch.org/hub/intelisl_midas_v2/
- You can run BoostingMonocularDepth on their Colab, linked here: https://colab.research.google.com/github/compphoto/BoostingMonocularDepth/blob/main/Boostmonoculardepth.ipynb
- Running this program on Colab is not officially supported, but it may work. Please look for more suitable ways of running this. If you still decide to try, standalone installation may be easier to manage.
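For anyone who wants to try the MiDaS route from a plain Python environment rather than the Colab notebook, the PyTorch Hub page linked above documents loading the model with `torch.hub.load`. A minimal sketch (this assumes `torch` and `timm` are installed and that the weights can be downloaded on first run; the random image is a placeholder for a real photo loaded with, e.g., OpenCV):

```python
import numpy as np
import torch

# Load the small MiDaS model and its matching input transform from PyTorch Hub
# (downloads the weights on first use; requires the `timm` package).
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()

transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform

# Placeholder image: in practice, load a photo as an RGB uint8 array
# (e.g. cv2.imread followed by a BGR -> RGB conversion).
img = np.random.randint(0, 256, (384, 384, 3), dtype=np.uint8)

with torch.no_grad():
    prediction = midas(transform(img))

# 2D inverse relative depth map for the (resized) input image.
depth = prediction.squeeze().cpu().numpy()
print(depth.shape)
```

The resulting `depth` array is a relative (inverse) depth map; to use it with this repo or the A1111 plugin you would still save it as an image and resize it back to the source photo's resolution.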
This project is a depth plugin (add-on) for A1111, and it has a 3D photo inpainting mode derived from this repo. The repo can also be run in standalone mode (I've never tried it, but I'm also planning to use it that way).
It worked well, and I created some good marketing assets for a campaign project with 1080×1920 images. You can see examples at links 1, 2.
Hi! Thanks for the reply! I've concluded that this project isn't developed for Colab and is suited to a standalone computer with a GPU. Unfortunately, that's not a solution for me. What can I say... I had different expectations, though I looked at the two examples you gave with genuine interest (just to compare their results with the old ones). In my view, the old original project produced deeper, higher-quality movement of the background layer in parallax mode (linear zoom-in/zoom-out). I'm afraid I wasn't impressed by the results at all. It would be better to use Adobe After Effects manually to achieve a stronger result, though that takes a lot of time per photograph.
Examples (Russian language): https://www.youtube.com/watch?v=ZPSX3ouYFqM
An unbelievable result here: https://www.youtube.com/watch?v=YXRiTMJ6HR0
Oh, if you're referring to the depth of the parallax effect, I intentionally designed it that way: I was aiming for a Vertigo effect with the images. However, it becomes distorted when you increase the intensity of the effect.
Nevertheless, you can't match handcrafted After Effects quality with either GitHub repo; it's a completely different approach. With these AI repos and a powerful graphics card, you can get a result in about 15 minutes. In AE, achieving the same level of quality and style as the videos you showed me would take significantly more time, especially without expertise in this type of effect. The choice depends on your project's specific style and quality requirements.
Cheers.