Issues
How to decide the value of λ
#60 opened by silentsaber (5 comments)
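The short answer from the WGAN-GP paper (Gulrajani et al., 2017) is λ = 10, which the authors found to work well across architectures and datasets. A minimal sketch of where λ enters the critic loss (variable names here are illustrative, not necessarily this repository's):

```python
import torch

LAMBDA = 10.0  # the value used throughout the WGAN-GP paper

# Stand-in critic outputs and penalty, only to show where LAMBDA appears.
d_real = torch.randn(64)           # critic scores on real samples
d_fake = torch.randn(64)           # critic scores on generated samples
gradient_penalty = torch.rand(())  # placeholder for the computed penalty

d_cost = d_fake.mean() - d_real.mean() + LAMBDA * gradient_penalty
```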
Memory Leak
#21 opened by SivanKe (4 comments)
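Without the thread it is hard to say what leaked here, but one classic cause of steadily growing memory in PyTorch training loops is logging loss tensors instead of Python numbers, which keeps every iteration's autograd graph alive. A hedged sketch of the pattern and its fix:

```python
import torch

history = []
loss = (torch.randn(10, requires_grad=True) ** 2).mean()  # stand-in loss

# Leaky pattern: the stored tensor pins its whole computation graph.
# history.append(loss)

# Safe pattern: convert to a plain float before storing or printing.
history.append(loss.item())
```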
WGAN-GP loss keeps growing
#54 opened by haonanhe (1 comment)
A bunch of errors
#61 opened by zhangzherui123 (3 comments)
The axis of the norm of the gradient
#56 opened by CharlesNord (1 comment)
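This question (#56), together with #47, #53, #25, and #28 further down this list, all circle around the same function, so a single annotated sketch of the standard PyTorch gradient-penalty recipe may help. This is a hedged reconstruction of the usual approach, not a verbatim copy of this repository's calc_gradient_penalty. On #53 specifically: the critic output is a scalar per sample, so its Jacobian with respect to the input is a single row, and penalizing the gradient norm and the Jacobian norm coincide here.

```python
import torch
from torch import autograd, nn

def calc_gradient_penalty(netD, real_data, fake_data, lambda_=10.0):
    batch_size = real_data.size(0)
    # Sample a random point on the line between each real/fake pair.
    alpha = torch.rand(batch_size, 1, 1, 1, device=real_data.device)
    interpolates = (alpha * real_data + (1 - alpha) * fake_data).requires_grad_(True)

    disc_interpolates = netD(interpolates)

    # grad_outputs of ones asks for d(sum of outputs)/d(interpolates); since
    # each score depends only on its own input row, this IS the per-sample
    # gradient (answering the grad_outputs question in #28).
    gradients = autograd.grad(
        outputs=disc_interpolates,
        inputs=interpolates,
        grad_outputs=torch.ones_like(disc_interpolates),
        create_graph=True,  # the penalty itself must be differentiable
        retain_graph=True,
    )[0]

    # Flatten all non-batch axes; the norm over dim=1 is then the L2 norm of
    # each sample's full gradient (the dim=1 asked about in #25 and #56).
    gradients = gradients.view(batch_size, -1)
    return lambda_ * ((gradients.norm(2, dim=1) - 1) ** 2).mean()

# Smoke test with a toy critic; shapes are illustrative only.
netD = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))
print(calc_gradient_penalty(netD, torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)))
```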
Sometimes the loss is negative during training
#45 opened by hw118118 (2 comments)
Is the G_loss wrong?
#58 opened by fxcdl (1 comment)
Is the loss function wrong in the implementation?
#57 opened by wangpengabc (1 comment)
mone in generator
#32 opened by udion (0 comments)
A question about D_cost
#55 opened by etoilestar (2 comments)
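This entry, #54, #45, #58, and #57 above, and #34 below all reduce to the sign conventions of the WGAN-GP objectives: critic scores are unbounded, so the losses can legitimately be negative, drift, or rise for long stretches. A sketch of the usual conventions (illustrative names, not necessarily this repository's exact code):

```python
import torch

d_real = torch.randn(64)  # critic scores on real data (unbounded)
d_fake = torch.randn(64)  # critic scores on generated data
gp = torch.rand(())       # gradient penalty term

# Critic minimizes E[D(fake)] - E[D(real)] + penalty, i.e. it pushes real
# scores up and fake scores down; this difference has no fixed sign.
d_cost = d_fake.mean() - d_real.mean() + gp

# Generator minimizes -E[D(fake)], i.e. it pushes fake scores up.
g_cost = -d_fake.mean()

# The quantity worth tracking is the Wasserstein estimate below; unlike a
# cross-entropy loss it is not anchored at zero and may be negative early on.
wasserstein_estimate = d_real.mean() - d_fake.mean()
```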
About WGAN-GP
#41 opened by wtmlon (5 comments)
D_real.backward(mone) RuntimeError: invalid gradient at index 0 - expected shape [] but got [1]
#42 opened by ramesh720 (2 comments)
Index [0] of grad tensor
#35 opened by ahmed-fau (1 comment)
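#32, #42, and #35 are all about the mone idiom. Passing mone (minus one) to backward scales the gradient by -1, which is how the original loop turns the optimizer's minimization into maximizing E[D(real)]. The RuntimeError in #42 appears on newer PyTorch because .mean() now returns a 0-dim tensor, so the gradient argument must be 0-dim as well. A hedged sketch of both the fix and the simpler equivalent:

```python
import torch

scores = torch.randn(4, requires_grad=True)
d_real = scores.mean()  # 0-dim tensor on PyTorch >= 0.4

# torch.FloatTensor([-1]) has shape [1] and triggers the reported error;
# a 0-dim tensor matches d_real's shape.
mone = torch.tensor(-1.0)
d_real.backward(mone)

scores.grad = None  # clear before showing the equivalent form

# Equivalent and clearer: fold the sign into the loss itself.
(-scores.mean()).backward()
```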
Bug in calc_gradient_penalty?
#47 opened by lezhang-thu (0 comments)
Penalizing the norm of the Jacobian
#53 opened by RylanSchaeffer (2 comments)
The original paper requires gradient clipping, which this implementation seems to ignore
#49 opened by Yasheng-Sun (1 comment)
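On #49: the original WGAN paper (Arjovsky et al., 2017) enforces the Lipschitz constraint with weight clipping, and WGAN-GP was proposed precisely to replace that clipping with the gradient penalty, so its absence here is by design rather than an omission. A sketch of the two alternatives:

```python
import torch
from torch import nn

netD = nn.Linear(10, 1)  # toy critic, just to make the contrast concrete

# Original WGAN: clamp every critic weight after each optimizer step.
for p in netD.parameters():
    p.data.clamp_(-0.01, 0.01)

# WGAN-GP: no clamping at all; the Lipschitz constraint is encouraged by
# adding LAMBDA * gradient_penalty to the critic loss instead.
```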
(op: 'FusedBatchNorm') with input shapes: [64,256,8,8], [256], [256], [0], [0].
#52 opened by Ronzhen (0 comments)
How to download the language dataset?
#50 opened by mathfinder (3 comments)
Question about the code
#20 opened by tartarleft (0 comments)
Mode collapse with WGAN
#48 opened by jlevy44 (0 comments)
Why is d_loss rising?
#34 opened by magnificGH (6 comments)
gradients.norm(2, dim=1): why dim=1?
#25 opened by LynnHo (1 comment)
I'm testing your code on 240×360 images; the generator gradient converges to zero
#29 opened by fsalmasri (0 comments)
PyTorch version
#40 opened by Haicang (0 comments)
How do I run this code?
#39 opened by yangzhikai (1 comment)
Why did you only call backward on the gradient penalty?
#36 opened by timho102003 (1 comment)
I'm trying to implement this with Gluon
#38 opened by PistonY (0 comments)
Issues with the Python 3 version
#26 opened by jmandivarapu1 (2 comments)
grad_outputs for the gradient penalty
#28 opened by leesunfreshing (3 comments)
DataParallel problem (with multiple GPUs)
#22 opened by bcd33bcd (1 comment)
An error occurred
#23 opened by lzqcode (4 comments)
Multi-GPU?
#15 opened by jiujing23333 (2 comments)
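For #22 and #15, the usual starting point is nn.DataParallel, which replicates a module across GPUs and splits the batch along dim 0. One caveat worth checking against the issue threads: the gradient penalty calls autograd.grad on the critic's output, and with DataParallel that differentiates through the scatter/gather, which is correct but has been a known source of slowness and subtle bugs. A minimal, hedged sketch:

```python
import torch
from torch import nn

netG = nn.Linear(128, 784)  # stand-ins for the real generator and critic
netD = nn.Linear(784, 1)

if torch.cuda.is_available():
    netG, netD = netG.cuda(), netD.cuda()
    if torch.cuda.device_count() > 1:
        # Replicates each module across GPUs; inputs are split on dim 0.
        netG = nn.DataParallel(netG)
        netD = nn.DataParallel(netD)
```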
Why can't I train a model with gan_mnist.py?
#19 opened by csjfwang (5 comments)
After adding a self-implemented layer normalization, the backward pass for gradient_penalty became slow
#10 opened by santisy (1 comment)
Why can't we use a batchnorm layer?
#17 opened by duzeyan (10 comments)
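#17 and #10 share one root cause: the gradient penalty is defined per sample, while BatchNorm2d makes every critic score depend on the whole batch, so batchnorm is disallowed in the critic (it remains fine in the generator). The WGAN-GP paper recommends layer normalization there instead; a hedged sketch using GroupNorm with one group, which normalizes over (C, H, W) like a layer norm and, being a built-in fused op, avoids the slow backward reported in #10 for a hand-rolled LayerNorm:

```python
from torch import nn

# One critic block with a batch-independent normalization in place of
# BatchNorm2d; channel sizes here are illustrative.
critic_block = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
    nn.GroupNorm(1, 128),  # num_groups=1 acts as a layer norm over (C, H, W)
    nn.LeakyReLU(0.2),
)
```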
RuntimeError: cuda runtime error (2) : out of memory at /py/conda-bld/pytorch_1493680494901/work/torch/lib/THC/generic/THCStorage.cu:66
#13 opened by clu5 (10 comments)
Fixing the language model code
#11 opened by thvasilo (1 comment)
"autograd" has no attribute 'grad'
#14 opened by ypruan (3 comments)
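The AttributeError in #14 is almost certainly a PyTorch version problem: torch.autograd.grad did not exist in the earliest releases (it arrived around 0.2, if memory serves), so upgrading PyTorch is the fix. A quick check that the call works on a current install:

```python
import torch
from torch import autograd

x = torch.randn(3, requires_grad=True)
y = (x ** 2).sum()

# Raises AttributeError on PyTorch versions that predate autograd.grad.
(grad_x,) = autograd.grad(outputs=y, inputs=x)
print(torch.allclose(grad_x, 2 * x))  # True
```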
Zero gradient
#16 opened by elbamos (2 comments)
NaNs
#12 opened by shinydinosaur
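Without the thread it is impossible to be sure what produced the NaNs in #12, but one well-known culprit in WGAN-GP is the L2 norm inside the penalty: it is a square root, whose gradient is 0/0 when a sample's gradient vector is exactly zero. Adding a small epsilon under the root is a common workaround (lowering the learning rate or λ also helps in practice); a sketch:

```python
import torch

# All-zero per-sample gradients: the worst case for the bare norm.
gradients = torch.zeros(8, 3072, requires_grad=True)

# A bare norm can backpropagate 0/0 = NaN at exactly zero; the epsilon
# keeps the square root's derivative finite.
safe_norm = torch.sqrt((gradients ** 2).sum(dim=1) + 1e-12)
penalty = ((safe_norm - 1) ** 2).mean()
penalty.backward()
print(torch.isnan(gradients.grad).any())  # tensor(False) with the epsilon
```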