dome272/Diffusion-Models-pytorch

What should I change if I want to generate grayscale image out of grayscale image dataset?

devshaww opened this issue · 4 comments


Hey,
you would change the in and out channels here:

def __init__(self, c_in=3, c_out=3, time_dim=256, num_classes=None, device="cuda"):

That should be it. And then your DataLoader would need to be adjusted too.

Hi outlier, I am working on a grayscale ultrasound dataset with image size 256*256, consisting of 5 different classes. I modified the image size in class SelfAttention and also changed the input and output channels for grayscale images, but when I run modules.py I get this error. Kindly help out.
(screenshot of the error attached)

Also, why does it sample 10 images every time during training? Can we change that?
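On the sampling count: the number of preview images is just the `n` argument passed to the diffusion sampling call in the training loop, so changing it there changes how many images you get. A toy sketch of the idea (this is an illustrative stand-in, not the repo's actual `Diffusion.sample`; the stand-in `model` is a hypothetical placeholder):

```python
import torch

def sample(model, n, img_size=8, channels=1, steps=10):
    """Minimal DDPM-style sampling sketch: `n` controls how many images come back."""
    x = torch.randn(n, channels, img_size, img_size)  # start from pure noise
    for t in reversed(range(steps)):
        # the real code applies the learned denoising update here;
        # we just nudge x with the model output as a stand-in
        x = x - 0.1 * model(x)
    return x

model = lambda x: x                     # placeholder "denoiser" for illustration only
imgs = sample(model, n=4)               # change n to sample a different batch size
print(imgs.shape)                       # torch.Size([4, 1, 8, 8])
```

In the actual training script, look for the call that produces the logged images (something like `diffusion.sample(model, n=...)`) and change `n` there.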

There is an improved codebase here: https://github.com/tcapelle/Diffusion-Models-pytorch
I think it implements easy handling of different image resolutions.

> There is an improved codebase here: https://github.com/tcapelle/Diffusion-Models-pytorch I think it implements easy handling of different image resolutions.

(screenshot of the error attached)

I am still facing this issue. I tried that repo but am still stuck. I want to work on grayscale images of size 256*256. Kindly guide me on what I should do.
Thanks
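One common cause of shape errors at larger resolutions in this codebase: each `SelfAttention` block hard-codes the feature-map `size` at its depth, and the `view()` inside it fails if the input resolution changes without scaling those sizes. A sketch of a repo-style attention block illustrating this (toy channel/size numbers; for a 256x256 input every hard-coded `size` in the 64x64 defaults would need to be 4x larger, e.g. `SelfAttention(128, 32)` becomes `SelfAttention(128, 128)`):

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Sketch of a repo-style attention block: `size` must equal the
    feature-map resolution where the block sits, or view() below errors."""
    def __init__(self, channels, size):
        super().__init__()
        self.channels, self.size = channels, size
        self.mha = nn.MultiheadAttention(channels, 4, batch_first=True)

    def forward(self, x):
        # (B, C, H, W) -> (B, H*W, C); a mismatched `size` raises the
        # kind of shape error shown in the screenshots
        x = x.view(-1, self.channels, self.size * self.size).swapaxes(1, 2)
        out, _ = self.mha(x, x, x)
        return out.swapaxes(2, 1).view(-1, self.channels, self.size, self.size)

sa = SelfAttention(channels=64, size=16)     # size matches the 16x16 input below
y = sa(torch.randn(2, 64, 16, 16))
print(y.shape)                               # torch.Size([2, 64, 16, 16])
```

So for 256x256 grayscale images, check that every `SelfAttention(channels, size)` in the down and up paths uses the resolution actually reached at that depth (256 -> 128 -> 64 -> 32, etc.), and that `c_in`/`c_out` are 1.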