rezaakb/pinns-torch

TypeError: unhashable type: 'list' Raised when the batch_size is set


Hi, thanks for your efforts on this wonderful project! When I set the batch_size in config.yaml, I get the error below. I have seen the same issue reported before and tried setting save_pred: true in config.yaml, but the error is unchanged.

main.py 94 decorated_main
_run_hydra(

utils.py 394 _run_hydra
_run_app(

utils.py 457 _run_app
run_and_report(

utils.py 223 run_and_report
raise ex

utils.py 220 run_and_report
return func()

utils.py 458 <lambda>
lambda: hydra.run(

hydra.py 132 run
_ = ret.return_value

utils.py 260 return_value
raise self._return_value

utils.py 186 run_job
ret.return_value = task_function(task_cfg)

train_real_temp.py 180 main
metric_dict, _ = pinnstorch.train(

utils.py 85 wrap
raise ex

utils.py 73 wrap
metric_dict, object_dict = task_func(

train.py 174 train
preds_list = trainer.predict(

trainer.py 864 predict
return call._call_and_handle_interrupt(

call.py 44 _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)

trainer.py 903 _predict_impl
results = self._run(model, ckpt_path=ckpt_path)

trainer.py 987 _run
results = self._run_stage()

trainer.py 1028 _run_stage
return self.predict_loop.run()

utilities.py 182 _decorator
return loop_run(self, *args, **kwargs)

prediction_loop.py 124 run
self._predict_step(batch, batch_idx, dataloader_idx, dataloader_iter)

prediction_loop.py 231 _predict_step
batch = call._call_strategy_hook(trainer, "batch_to_device", batch, dataloader_idx=dataloader_idx)

call.py 309 _call_strategy_hook
output = fn(*args, **kwargs)

strategy.py 278 batch_to_device
return model._apply_batch_transfer_handler(batch, device=device, dataloader_idx=dataloader_idx)

module.py 347 _apply_batch_transfer_handler
batch = self._call_batch_hook("transfer_batch_to_device", batch, device, dataloader_idx)

module.py 336 _call_batch_hook
return trainer_method(trainer, hook_name, *args)

call.py 157 _call_lightning_module_hook
output = fn(*args, **kwargs)

pinn_module.py 241 transfer_batch_to_device
self.copy_batch(batch)

pinn_module.py 329 copy_batch
spatial, time, solution = self.static_batch[key]

TypeError: unhashable type: 'list'

It seems that copy_batch() in pinn_module.py is receiving the last batch not as a dict but as a list/tuple. As a workaround, I added a type check on the batch before copying it into self.static_batch. I am still trying to find the source of the issue by going through the LightningModule class. If you have any advice on how to solve this issue at its root, that would be of great help!
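For context, here is a tiny standalone snippet (not the project code, just an illustration with dummy data) showing why iterating over a list batch ends in exactly this TypeError: the loop variable is then a list itself, and a list cannot be used as a dict key when indexing self.static_batch:

    # Standalone illustration with dummy data: the dict case works, the list case
    # reproduces "TypeError: unhashable type: 'list'" when its element is used as a key.
    static_batch = {"pde": ("spatial", "time", "solution")}

    dict_batch = {"pde": ("spatial_new", "time_new", "solution_new")}
    for key in dict_batch:                      # key is the string "pde" -> lookup works
        spatial, time, solution = static_batch[key]

    list_batch = [["spatial_new", "time_new", "solution_new"]]
    for key in list_batch:                      # key is the inner list itself
        try:
            spatial, time, solution = static_batch[key]
        except TypeError as err:
            print(err)                          # unhashable type: 'list'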

Here is my tentative patch in pinn_module.py that works around the issue:

  def copy_batch(self, batch) -> None:
      """
      Fills the graph's input memory with new data to compute on. If the batch_size is not
      specified, the model uses all the available data, and there is no need to copy the data.

      :param batch: A batch of data, expected to be a dict mapping each condition key to a
          (spatial, time, solution) tuple.
      """
      # Only dict batches match the layout of self.static_batch; Lightning may hand over
      # the last (partial) batch as a list/tuple, which is simply skipped here.
      if isinstance(batch, dict):
          import time as tm
          st = tm.time()

          for key in batch:
              spatial, time, solution = self.static_batch[key]
              spatial_new, time_new, solution_new = batch[key]
              # Copy the new tensors into the pre-allocated static tensors in place.
              time = time.requires_grad_(False).copy_(time_new)
              x = [
                  spatial_.requires_grad_(False).copy_(spatial_new[i])
                  for i, spatial_ in enumerate(spatial)
              ]
              if solution_new is not None:
                  solution = {
                      key_sol: solution[key_sol].copy_(solution_new[key_sol])
                      for key_sol in solution_new
                  }

          # Record how long the copy took, for profiling.
          self.times_batch.append(tm.time() - st)

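An alternative I considered, only a rough sketch under the assumption that transfer_batch_to_device in pinn_module.py follows the standard Lightning hook signature and that returning the static batch is what the rest of the pipeline expects (which I have not verified), would be to guard one level higher so copy_batch() never sees a non-dict batch:

    # Hypothetical sketch, not the actual pinn_module.py code: skip the static-batch
    # copy for list/tuple batches and fall back to Lightning's default device transfer.
    def transfer_batch_to_device(self, batch, device, dataloader_idx):
        if isinstance(batch, dict):
            self.copy_batch(batch)
            # Assumption: downstream code reads from the pre-allocated static batch;
            # the real return value in pinn_module.py may differ.
            return self.static_batch
        return super().transfer_batch_to_device(batch, device, dataloader_idx)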
I've run into the same issue these days and it really bothered me. I've searched a lot but still cannot fix the problem. Thanks for your contribution, it helps me a lot.