Implementation for custom datasets
Closed this issue · 8 comments
I am trying to implement the algorithm for a custom dataset with 10-dimensional numeric data (not images). I have my `Dataset` and `DataLoader`, but I am not able to use them here, since the implementation involves weak and strong transforms. I believe this is done for consistency regularization, but I am not sure how to adapt my `DataLoader` so that it fits with the rest of the code. Thanks in advance for your help.
Hi,
Indeed, we do not apply consistency regularization for tabular datasets; please see Appendix C.6. Also, we used the full dataset for training, without sampling mini-batches.
Best
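For anyone adapting this to tabular data, here is a minimal sketch of what the comment above describes: a dataset that returns raw feature vectors with no weak/strong augmentations, loaded as a single full-dataset batch. The class name `TabularDataset` and the setup are hypothetical illustrations, not code from this repository.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class TabularDataset(Dataset):
    """Hypothetical dataset for 10-dimensional numeric features (no transforms)."""
    def __init__(self, features, labels):
        self.features = torch.as_tensor(features, dtype=torch.float32)
        self.labels = torch.as_tensor(labels, dtype=torch.long)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        # No weak/strong augmentation pair: return the raw feature vector once.
        return self.features[idx], self.labels[idx]

features = torch.randn(100, 10)          # placeholder 10-d numeric data
labels = torch.randint(0, 9, (100,))
dataset = TabularDataset(features, labels)

# Full dataset in one batch, instead of sampled mini-batches.
loader = DataLoader(dataset, batch_size=len(dataset), shuffle=False)
x, y = next(iter(loader))
print(x.shape)  # torch.Size([100, 10])
```

Any code path in the original implementation that expects a (weak, strong) pair would then need to be bypassed or fed the same raw features for both views.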
Thank you for the clarification.
I am seeing `====> Nan detected, return relaxed solution` throughout training. I have run the code several times to check whether this is due to randomness during training, but it happens every time. In debug mode, I see that `c = torch.ones((N, 1)) / N` starts from small values and then quickly blows up to values in the range of `1e6`. If I understand correctly, `c` is the sampling distribution over the examples in a batch, so its entries should never exceed 1, yet I see wildly large values.
I found the cause to be `PS = torch.pow(PS, eta)`, which leaves `PS` with values around `1e-3`, because the predictions from the model are around `1e-1`. After this, the variables `r` and `c` both blow up, since they have `PS` in the denominator. I can see why the variables are blowing up, but I am not sure how to fix the issue. I have tried setting `eta=1` to avoid small values in `PS`, but that does not resolve it. Thanks in advance for your help.
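To make the failure mode concrete, here is a standalone sketch of a Sinkhorn-style scaling loop. The names `PS`, `r`, `c`, and `eta` mirror the discussion, but the loop itself is a generic reconstruction under assumption, not the repository's code. The point it illustrates: `r` and `c` are updated by dividing by matrix-vector products of `PS`, so when `torch.pow(PS, eta)` sharpens the entries toward zero, those denominators shrink and the scalings can overflow; clamping the denominators is one common safeguard.

```python
import torch

def sinkhorn_sketch(PS, eta=0.1, n_iters=50, eps=1e-8):
    """Generic Sinkhorn scaling toward uniform row/column marginals (illustrative only)."""
    N, K = PS.shape
    PS = torch.pow(PS, eta)        # sharpening: small probabilities become tiny
    r = torch.ones(N, 1) / N       # target marginal over examples
    c = torch.ones(K, 1) / K       # target marginal over classes
    u = torch.ones(N, 1)
    for _ in range(n_iters):
        # Clamping the denominators keeps tiny PS entries from
        # driving the scaling vectors to ~1e6 and beyond.
        v = c / (PS.t() @ u).clamp_min(eps)
        u = r / (PS @ v).clamp_min(eps)
    plan = u * PS * v.t()
    return plan / plan.sum()

probs = torch.softmax(torch.randn(100, 9), dim=1)  # predictions around 1e-1
plan = sinkhorn_sketch(probs)
print(torch.isfinite(plan).all().item())
```

Without the `clamp_min` calls, the same loop can produce `inf`/`nan` once `PS @ v` underflows, which matches the "relaxed solution" warning being triggered on every iteration.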
Please ignore this warning information. This is our relaxed solution for avoiding numerical issues; see Appendix.
I ignored the message, and trained a model for my dataset. I see that the predictions from the model are always a particular class (about 90% of the time), and I am not sure why this is happening. I have tried different data scaling methods, and different model depths and feature sizes. For all these configurations, I see that the predictions are mostly a single class (although the exact class varies for different models).
Is there any particular reason why this might be happening? Is the estimated prior somehow collapsing to this class? One possibility is that, since the empirical distribution is estimated from the predictions, a positive feedback loop reinforces the model's outputs until it homes in on that one class. But I am not sure whether this is the reason, or how to correct it if it is.
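One way to probe the feedback-loop hypothesis (a hypothetical diagnostic, not part of the repository) is to compute the marginal of the model's predictions over the whole dataset, i.e. the model-implied class prior, and compare it with the class distribution you expect. If one class holds most of the mass, the prior estimate is collapsing. The logits below are synthetic, constructed to mimic a collapsed model:

```python
import torch

def prediction_marginal(logits):
    """Average softmax output over the dataset: the model-implied class prior."""
    return torch.softmax(logits, dim=1).mean(dim=0)

# Synthetic logits imitating a collapsed model: class 3 dominates.
logits = torch.randn(1000, 9)
logits[:, 3] += 5.0
marginal = prediction_marginal(logits)
print(marginal.argmax().item())       # the dominant class
print((marginal > 0.5).any().item())  # True: one class holds most of the mass
```

If the real model's marginal looks like this while the data is reasonably balanced, that supports the self-reinforcement explanation; if the marginal roughly matches the true class frequencies, the collapse lies elsewhere.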
Also, according to this comment, `nan` should not occur frequently, but I see the message `====> Nan detected, return relaxed solution` printed countless times during both the pre-estimation and final training stages. Do these observations point to an ill-posed problem, which might in turn be causing the model to predict a single class more than 90% of the time? Any insight into why this might be happening, and how to correct it, would be greatly appreciated. Thanks in advance for your help.
As an update: I made a mistake in the data-processing step, which caused the abnormal outputs. Now I am getting better results. Still, the model never predicts certain classes for any input; for instance, out of 9 classes, it never predicts classes 2, 4, and 5 anywhere in the dataset. Is there any reason why this might be happening? Thanks in advance for your help.
What dataset?
It is an industrial dataset. There was a mistake in the way I was feeding the data, which was the cause of the fixed outputs. That is now corrected, and I am getting consistent results. Thanks!