LINCellularNeuroscience/VAME

RuntimeError: input.size(-1) must be equal to input_size. Expected 10, got 12

Virginia9733 opened this issue · 5 comments

Hi,
I am using VAME to analyse my own videos. However, when I run vame.train_model(config), I get the error:
"RuntimeError: input.size(-1) must be equal to input_size. Expected 10, got 12"

I tried to fix it following the suggestion in #27 ("try to set the num_features in your config.yaml"). However, in my config.yaml, num_features is already 12. I tried changing it to 10, but that returns exactly the same error message:

I don't know how to fix it; any help would be much appreciated.

In [16]: vame.train_model(config)
Train Variational Autoencoder - model name: VAME

Latent Dimensions: 30, Time window: 30, Batch Size: 256, Beta: 1, lr: 0.0005

Initialize train data. Datapoints 8004
Initialize test data. Datapoints 889
Scheduler step size: 100, Scheduler gamma: 0.20

Start training...
Epoch: 1

RuntimeError Traceback (most recent call last)
in
----> 1 vame.train_model(config)

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/vame-1.0-py3.7.egg/vame/model/rnn_vae.py in train_model(config)
336 FUTURE_STEPS, scheduler, MSE_REC_REDUCTION,
337 MSE_PRED_REDUCTION, KMEANS_LOSS, KMEANS_LAMBDA,
--> 338 TRAIN_BATCH_SIZE, noise)
339
340 current_loss, test_loss, test_list = test(test_loader, epoch, model, optimizer,

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/vame-1.0-py3.7.egg/vame/model/rnn_vae.py in train(train_loader, epoch, model, optimizer, anneal_function, BETA, kl_start, annealtime, seq_len, future_decoder, future_steps, scheduler, mse_red, mse_pred, kloss, klmbda, bsize, noise)
120
121 if future_decoder:
--> 122 data_tilde, future, latent, mu, logvar = model(data_gaussian)
123
124 rec_loss = reconstruction_loss(data, data_tilde, mse_red)

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/vame-1.0-py3.7.egg/vame/model/rnn_model.py in forward(self, seq)
168
169 """ Encode input sequence """
--> 170 h_n = self.encoder(seq)
171
172 """ Compute the latent state via reparametrization trick """

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/vame-1.0-py3.7.egg/vame/model/rnn_model.py in forward(self, inputs)
34
35 def forward(self, inputs):
---> 36 outputs_1, hidden_1 = self.encoder_rnn(inputs)#UNRELEASED!
37
38 hidden = torch.cat((hidden_1[0,...], hidden_1[1,...], hidden_1[2,...], hidden_1[3,...]),1)

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/torch/nn/modules/rnn.py in forward(self, input, hx)
817 hx = self.permute_hidden(hx, sorted_indices)
818
--> 819 self.check_forward_args(input, hx, batch_sizes)
820 if batch_sizes is None:
821 result = _VF.gru(input, hx, self._flat_weights, self.bias, self.num_layers,

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/torch/nn/modules/rnn.py in check_forward_args(self, input, hidden, batch_sizes)
224
225 def check_forward_args(self, input: Tensor, hidden: Tensor, batch_sizes: Optional[Tensor]):
--> 226 self.check_input(input, batch_sizes)
227 expected_hidden_size = self.get_expected_hidden_size(input, batch_sizes)
228

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/torch/nn/modules/rnn.py in check_input(self, input, batch_sizes)
202 raise RuntimeError(
203 'input.size(-1) must be equal to input_size. Expected {}, got {}'.format(
--> 204 self.input_size, input.size(-1)))
205
206 def get_expected_hidden_size(self, input: Tensor, batch_sizes: Optional[Tensor]) -> Tuple[int, int, int]:

RuntimeError: input.size(-1) must be equal to input_size. Expected 10, got 12
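For context, the error is raised by PyTorch's own shape check: the GRU inside VAME's encoder is constructed with an input_size derived from num_features, and PyTorch compares that against the last dimension of each incoming batch. A dependency-free sketch of that check (a hypothetical standalone function mirroring the logic in torch/nn/modules/rnn.py shown in the traceback):

```python
def check_input(input_shape, input_size):
    """Mimics the shape check in torch.nn.modules.rnn (see traceback above)."""
    if input_shape[-1] != input_size:
        raise RuntimeError(
            'input.size(-1) must be equal to input_size. '
            'Expected {}, got {}'.format(input_size, input_shape[-1]))

# The GRU was built expecting 10 features, but each sample carries 12:
try:
    check_input((256, 30, 12), 10)  # (batch, time window, features)
except RuntimeError as e:
    print(e)  # input.size(-1) must be equal to input_size. Expected 10, got 12
```

So the mismatch is between the model's configured feature count and the actual width of the training data, not a bug in the training loop itself.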

Hi,
Could you please check if your setting of "num_features" in your config.yaml matches the number of dimensions of your input time series data?
Best,
Pavol

Sorry for the late reply. In my config.yaml, num_features is already 12; I tried changing it to 10, but it returns exactly the same error message:

RuntimeError: input.size(-1) must be equal to input_size. Expected 10, got 12

Sorry, but I am not quite sure what "number of dimensions" means here, or what num_features represents. Is it related to the body parts that are labeled?

Hi there,
Exactly: num_features is the number of body parts labeled in DLC times 2 (one each for the x and y axes). Just count the columns in your CSV file and set the num_features parameter accordingly.
Best,
Pavol
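To make the column counting concrete, here is a small sketch that parses a hypothetical DLC-style CSV header (three header rows: scorer, bodyparts, coords; the file content below is made up for illustration) and derives num_features from the x/y columns:

```python
import csv
import io

# Hypothetical excerpt of a DLC output header, showing 2 body parts:
sample = io.StringIO(
    "scorer,DLC,DLC,DLC,DLC,DLC,DLC\n"
    "bodyparts,snout,snout,snout,tailbase,tailbase,tailbase\n"
    "coords,x,y,likelihood,x,y,likelihood\n"
)
rows = list(csv.reader(sample))

# Count only the x and y columns (skip the index column and likelihoods):
coords = rows[2][1:]
num_features = sum(1 for c in coords if c in ("x", "y"))
print(num_features)  # 4  (2 body parts * 2 axes)
```

For 7 labeled body parts, the same count would give 14.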

Thank you for your kind reply. I checked my DLC .csv: I labeled 7 body parts, which means num_features in my config.yaml should be 14.
I also noticed that I need to change pose_ref_index=[0,5] to pose_ref_index=[0,6] in vame.egocentric_alignment(config, pose_ref_index=[0,6]), since I have 7 body parts.
But still, quite strangely, after running train_model I got the same error message.

(Screenshot of the same error, 2021-06-04, attached.)

I figured out what went wrong.
No matter how I change num_features in config.yaml, running vame.update_config(config) always resets it to num_features = 12.
I don't know why this function arbitrarily resets num_features to 12, but if I skip the update step and otherwise follow the instructions, I can train the model successfully.
Thank you!
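For reference, pulling the thread's resolution together, the relevant entry in config.yaml for a 7-body-part DLC project would look like this (an excerpt only; all other keys unchanged, and without re-running vame.update_config afterwards, since that call overwrote the value here):

```yaml
# config.yaml (excerpt) -- 7 DLC body parts, x and y per part
num_features: 14
```

The matching alignment call is then vame.egocentric_alignment(config, pose_ref_index=[0,6]), as used above.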