Hello, I have a question about the attribute vector.
Hi, thank you for the interesting work! After reading the paper and the code, I have a question about the attribute vector.
My understanding is that your code uses differently scaled attribute vectors in the training and test phases. (I have read all of the related issues, including #13, #18, and #21.)
Here is my understanding:
In the training phase, the attribute vector lies in [-0.5, 0.5] (https://github.com/csmliu/STGAN/blob/master/train.py#L177: labels are read in {0, 1}, then (*2 - 1) * thres_int maps them to [-1, 1] * 0.5 = [-0.5, 0.5]). Therefore, each entry of the difference vector is -1, 0, or 1.
In contrast, in the test phase, the difference vector seems to range over [-1.5, 1.5]. (https://github.com/csmliu/STGAN/blob/master/train.py#L360: labels are read in {0, 1}, then (*2 - 1) * thres_int gives [-0.5, 0.5] for raw_b_sample, and the modified attribute is additionally scaled by test_int / thres_int, giving [-1, 1] for _b_sample_ipt.)
Therefore, test_label in L280 ranges over [-1.5, 1.5].
For example, in training, if a_ = [-0.5, 0.5, -0.5] and b_ = [0.5, -0.5, -0.5], then the difference vector is
b_ - a_ = [1.0, -1.0, 0.0].
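For reference, here is a minimal NumPy sketch of this training-phase arithmetic (thres_int = 0.5 is an assumption matching the repo's default; the variable names are mine):

```python
import numpy as np

thres_int = 0.5  # assumed default, matching train.py

a = np.array([0, 1, 0])  # source labels, read in {0, 1}
b = np.array([1, 0, 0])  # target labels

a_ = (a * 2 - 1) * thres_int  # [-0.5,  0.5, -0.5]
b_ = (b * 2 - 1) * thres_int  # [ 0.5, -0.5, -0.5]

print(b_ - a_)  # [ 1. -1.  0.] -- each entry is -1, 0, or 1
```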
In the test phase, test_label = _b_sample - raw_b_sample.
Let b_sample_ipt_list[0] = [1, 0, 1]; then raw_b_sample = [0.5, -0.5, 0.5] and b_sample_ipt_list[1] = [0, 0, 1].
Therefore, _b_sample_ipt[1] = [-0.5, -0.5, 0.5] in L358, and after L360, _b_sample_ipt[1] = [-1.0, -0.5, 0.5].
So, test_label[1] = [-1.0, -0.5, 0.5] - [0.5, -0.5, 0.5] = [-1.5, 0.0, 0.0].
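The test-phase arithmetic above can be sketched the same way (again assuming thres_int = 0.5 and test_int = 1.0, so the changed attribute is scaled by test_int / thres_int = 2):

```python
import numpy as np

thres_int, test_int = 0.5, 1.0  # assumed defaults

b_sample_ipt_0 = np.array([1, 0, 1])  # b_sample_ipt_list[0], original attributes
b_sample_ipt_1 = np.array([0, 0, 1])  # b_sample_ipt_list[1], first attribute flipped

raw_b_sample = (b_sample_ipt_0 * 2 - 1) * thres_int    # [ 0.5, -0.5,  0.5]
_b_sample_ipt = (b_sample_ipt_1 * 2 - 1) * thres_int   # [-0.5, -0.5,  0.5]  (cf. L358)

# Only the modified attribute (index 0 here) gets the extra scaling (cf. L360)
_b_sample_ipt[0] *= test_int / thres_int               # [-1.0, -0.5,  0.5]

test_label = _b_sample_ipt - raw_b_sample
print(test_label)                                      # [-1.5  0.   0. ]
```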
In fact, I cannot reproduce the results without this trick, and I found that it drastically changes the performance of the model.
Is my understanding correct? If so, please let me know. (I want to use your work as a baseline for my future research, so I need to understand it precisely.)
Thanks!
This code was modified from AttGAN when I did this work, so sorry for the inconvenience.
Your understanding of the training phase is right. There is a trade-off between image quality and attribute accuracy (it is also affected by the trade-off between the reconstruction loss and the classification loss).
For the inference phase, using your example:
Let a_sample_ipt = b_sample_ipt_list[0] = [1, 0, 1] (1) (L144). We are now changing the first attribute, which corresponds to index 1 in b_sample_ipt_list, so b_sample_ipt_list[1] = [0, 0, 1] (2) (L158).
From (1) and L163-164, we have raw_a_sample_ipt = [0.5, -0.5, 0.5] (3).
From (2) and L166, _b_sample_ipt = [-0.5, -0.5, 0.5] (4); the changed attribute is then rescaled in L172, giving _b_sample_ipt = [-1, -0.5, 0.5] (5), since test_int / thres_int = 2.
Finally, in L173-175, we get _b_sample = _b_sample_ipt = [-1, -0.5, 0.5] (6) (from (5)) and raw_b_sample = raw_a_sample_ipt = [0.5, -0.5, 0.5] (7) (from (3)). Thus, the test label is _b_sample - raw_b_sample = [-1.5, 0, 0], which is identical to the validation phase in train.py.
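If it helps, steps (1)-(7) can be summarized with a plain NumPy sketch of the arithmetic (not the actual test.py code; thres_int = 0.5 and a scaling factor of 2 are assumed):

```python
import numpy as np

thres_int = 0.5  # assumed default
scale = 2.0      # test_int / thres_int

a_sample_ipt = np.array([1, 0, 1])                     # (1)  L144
b_sample_ipt = np.array([0, 0, 1])                     # (2)  L158, first attribute flipped

raw_a_sample_ipt = (a_sample_ipt * 2 - 1) * thres_int  # (3)  [ 0.5, -0.5,  0.5]  (L163-164)
_b_sample_ipt = (b_sample_ipt * 2 - 1) * thres_int     # (4)  [-0.5, -0.5,  0.5]  (L166)
_b_sample_ipt[0] *= scale                              # (5)  [-1.0, -0.5,  0.5]  (L172)

_b_sample = _b_sample_ipt                              # (6)  (L173-175)
raw_b_sample = raw_a_sample_ipt                        # (7)

print(_b_sample - raw_b_sample)                        # test label: [-1.5  0.   0. ]
```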
Sorry again for the inconvenience.
@csmliu Thanks for the kind reply! With your comment, I now fully understand the training and inference scheme. Again, I appreciate your interesting work. :)