lucidrains/DALLE2-pytorch

Why is the parameter [timesteps] fixed?

4daJKong opened this issue · 4 comments

I found that at line 2308 of dalle2_pytorch.py, the creation of the NoiseScheduler depends on timesteps.
Does this mean I cannot change (increase or decrease) the value of timesteps at test time if I have loaded a pretrained model?
The 'sampling loop time step = 1000' in the decoder takes too long for me, and I don't see another parameter, like sample_timesteps in the prior, that can control this process.
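
As far as I can tell, the reason it is fixed is that the schedule tensors are built from timesteps at construction time and registered as buffers, so they end up in the checkpoint. A rough sketch of that pattern (not the library's actual code; names and schedule values are just illustrative):

```python
import torch
from torch import nn

class NoiseSchedulerSketch(nn.Module):
    # Rough sketch of the pattern, not the library's actual code: the schedule
    # length is decided at construction time and the derived tensors are
    # registered as buffers, so they are saved into the checkpoint.
    def __init__(self, timesteps = 1000):
        super().__init__()
        betas = torch.linspace(1e-4, 2e-2, timesteps)        # shape (timesteps,)
        alphas_cumprod = torch.cumprod(1. - betas, dim = 0)  # shape (timesteps,)
        self.register_buffer('betas', betas)
        self.register_buffer('alphas_cumprod', alphas_cumprod)

pretrained = NoiseSchedulerSketch(timesteps = 1000)
shorter    = NoiseSchedulerSketch(timesteps = 100)

# Loading a 1000-step checkpoint into a 100-step scheduler fails with a buffer
# size mismatch, which is why timesteps cannot simply be lowered at test time.
try:
    shorter.load_state_dict(pretrained.state_dict())
except RuntimeError as err:
    print(err)
```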

I have the same problem; did you find an answer?

Hi, adding "default_sample_timesteps": [100] after "default_cond_scale": [1.7] in gradio.example.json can adjust the sample_timesteps of the decoder.
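
Roughly like this, next to the existing key (only these two keys come from this thread; the rest of the file and the exact nesting are omitted, so adjust to wherever "default_cond_scale" lives in your copy):

```json
"default_cond_scale": [1.7],
"default_sample_timesteps": [100]
```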

Thank you very very much.

Hello! I added "default_sample_timesteps": [100] after "default_cond_scale": [1.7] in gradio.example.json, but I get the error below:
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 3 validation errors for DecoderConfig
sample_timesteps
value is not a valid integer (type=type_error.integer)
sample_timesteps -> 1
none is not an allowed value (type=type_error.none.not_allowed)
sample_timesteps
wrong tuple length 2, expected 1 (type=value_error.tuple.length; actual_length=2; expected_length=1)

Do you know how to deal with this bug? Thank you very much!