sajjjadayobi/FaceLib

pred = torch.argmax(input[:, :2], dim=1)

Closed this issue · 7 comments

In model.py:

def accuracy_gender(input, targs):
    pred = torch.argmax(input[:, :2], dim=1)
    y = targs[:, 0]
    return torch.sum(pred == y)

I want to know: why not pred = input[:, :1]?
I think the first two columns of input are gender and race.
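For what it's worth, the difference can be sketched on a toy batch (values made up, assuming the first two columns are the gender logits as in accuracy_gender above): argmax turns the two logits into a class label per sample, while a plain slice only returns raw logits.

```python
import torch

# Toy model outputs (made-up values): first two columns = gender logits.
out = torch.tensor([[2.0, -1.0],
                    [0.5,  3.0]])

# argmax over the two logit columns yields a class label per sample
# (0 or 1), which can be compared against integer target labels.
pred = torch.argmax(out[:, :2], dim=1)
print(pred)  # tensor([0, 1])

# A slice input[:, :1] instead keeps the raw first-column logits
# (shape [N, 1]) -- these are scores, not class labels.
print(out[:, :1].shape)  # torch.Size([2, 1])
```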

The first two columns are logits for (male, female) and the last one is for age.

As you can see, I've used the last index (aka 3) for age:

def l1loss_age(input, targs):
    return F.l1_loss(input[:, -1], targs[:, -1]).mean()
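A quick usage sketch on a dummy 4-column batch (values are illustrative; the only assumption taken from the thread is that age sits in the last column):

```python
import torch
import torch.nn.functional as F

def l1loss_age(input, targs):
    return F.l1_loss(input[:, -1], targs[:, -1]).mean()

# Dummy batches (illustrative values): age is the last column.
preds = torch.tensor([[2.0, -1.0, 0.0, 30.0],
                      [0.5,  3.0, 0.0, 25.0]])
targs = torch.tensor([[0.0,  0.0, 0.0, 28.0],
                      [1.0,  0.0, 0.0, 22.0]])

# Mean absolute error over the age column: (|30-28| + |25-22|) / 2 = 2.5
print(l1loss_age(preds, targs))  # tensor(2.5000)
```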

Thanks for your reply!

In Age & Gender Estimation, did you use the original images of the UTKFace dataset to train ShufflenetFull?

Although I do not know what you mean by original, I train on the UTKFace dataset with ShufflenetFull as the backbone.

Thanks! I see what you mean.
I have another question to ask:

def multitask_loss(input, target):
    input_gender = input[:, :2]
    input_age = input[:, -1]
    loss_gender = F.cross_entropy(input_gender, target[:, 0].long())
    loss_age = F.l1_loss(input_age, target[:, 2])

    return loss_gender / (.16) + loss_age * 2

I want to know about "loss_gender / (.16) + loss_age * 2": how is the ratio determined? Is it from experiments?

Yep, the loss ratio results from some experiments, depending on how much you care about gender detection versus age estimation.
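Concretely, dividing by .16 is the same as multiplying the gender loss by 6.25, so relative to the 2x weight on the age term the gender term is favored roughly 3:1. A sketch on dummy tensors (values and column layout are illustrative, following multitask_loss above):

```python
import torch
import torch.nn.functional as F

# Dummy outputs: two gender logits first, age in the last column.
out = torch.tensor([[2.0, -1.0, 0.0, 30.0],
                    [0.5,  3.0, 0.0, 25.0]])
# Dummy targets: gender label in column 0, age in column 2,
# matching the indexing in multitask_loss.
tgt = torch.tensor([[0.0, 0.0, 28.0],
                    [1.0, 0.0, 22.0]])

loss_gender = F.cross_entropy(out[:, :2], tgt[:, 0].long())
loss_age = F.l1_loss(out[:, -1], tgt[:, 2])

# 1 / .16 == 6.25 vs. the 2x weight on age: gender is up-weighted ~3:1.
total = loss_gender / 0.16 + loss_age * 2
```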

Thank you for your patience!