Question: Output feature shape from segmentation task
hkim716 opened this issue · 0 comments
Q1 : For the segmentation task (`human_seg`, for example), you use an encoder-decoder network called `MeshEncoderDecoder`. However, the output shape from the decoder, `fe = [12, 8, 2280]`, does not match the input shape `x = [12, 5, 2280]`. I think `fe` is built in the padding process called `rebuild_features_average`, but how can I recover `fe` to the same shape as the original feature inputs `x`, so that I can compare the inputs `x` with the decoder outputs? Can I make the decoder outputs take the same form as the inputs?
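One possible way to compare the two tensors, assuming the mismatch is only in the channel dimension (8 decoder channels vs. 5 input channels), is to add a learned 1x1 convolution that projects the decoder output back to the input channel count. This is a hedged sketch, not code from the repository; the tensors and the `project` layer below are stand-ins:

```python
import torch
import torch.nn as nn

# Hypothetical tensors with the shapes from the question:
# batch = 12, channels, edges = 2280
x = torch.randn(12, 5, 2280)   # original 5 input features per edge
fe = torch.randn(12, 8, 2280)  # decoder output with 8 channels

# A learned 1x1 convolution maps the 8 decoder channels back to 5,
# so the reconstruction can be compared against x (e.g. with MSE).
project = nn.Conv1d(in_channels=8, out_channels=5, kernel_size=1)
recon = project(fe)
assert recon.shape == x.shape  # (12, 5, 2280)
loss = nn.functional.mse_loss(recon, x)
```

The projection layer would have to be trained jointly with the rest of the network; whether that is meaningful depends on what you want the reconstruction to represent.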
Q2 :
What is the meaning of the contents of a `.obj` file?
```
v 6.656489 173.104050 -4.276626
v 0.113055 178.060135 1.550866
v 2.772762 156.364990 9.712910
f 461 457 330
f 589 420 330
f 420 459 460
```
Do the three numbers following `v` mean the x, y, z coordinates of a vertex? Are the three numbers following `f` the three vertex indices making up each face? And are these x, y, z coordinates and vertex indices what is used to generate the 5 input features?
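That is the standard reading of the Wavefront `.obj` format: `v` lines hold vertex coordinates, and `f` lines reference vertices by 1-based index. A minimal parser illustrating the two record types (the sample `.obj` string and index conversion are my own, not from the repository):

```python
def parse_obj(lines):
    """Parse 'v' (vertex) and 'f' (face) records from a Wavefront .obj file.

    'v x y z' -> one vertex with x, y, z coordinates.
    'f a b c' -> one triangular face, referencing vertices by 1-based index.
    """
    vertices, faces = [], []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(c) for c in parts[1:4]))
        elif parts[0] == "f":
            # .obj face indices are 1-based (and may look like 'v/vt/vn');
            # keep only the vertex index and convert it to 0-based.
            faces.append(tuple(int(tok.split("/")[0]) - 1 for tok in parts[1:4]))
    return vertices, faces

obj = """v 6.656489 173.104050 -4.276626
v 0.113055 178.060135 1.550866
v 2.772762 156.364990 9.712910
f 1 2 3"""
verts, faces = parse_obj(obj.splitlines())
# verts[0] == (6.656489, 173.104050, -4.276626); faces[0] == (0, 1, 2)
```

The vertex coordinates and face connectivity together define the mesh, from which per-edge features (such as dihedral angles and edge-length ratios) can be derived.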
Q3 : I can't find where the training loop is defined, i.e. the loss calculation, `zero_grad()`, and so on.
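If this is the MeshCNN repository, the loop is likely split between `train.py` and the model wrapper class rather than written in one place (worth verifying in your copy). For reference, the standard PyTorch steps the question names look like this generic sketch, with a stand-in model and dummy data:

```python
import torch
import torch.nn as nn

# Stand-in network and data; not the actual MeshCNN model or loader.
model = nn.Linear(5, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

inputs = torch.randn(12, 5)            # dummy batch of 12 samples
labels = torch.randint(0, 2, (12,))    # dummy class labels

for epoch in range(3):
    optimizer.zero_grad()              # clear accumulated gradients
    outputs = model(inputs)            # forward pass
    loss = criterion(outputs, labels)  # loss calculation
    loss.backward()                    # backpropagation
    optimizer.step()                   # parameter update
```

Searching the repository for `backward()` or `zero_grad()` should point you at the exact file where these steps live.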
Thanks !