QData/spacetimeformer

Errors when training on a custom dataset

harshnandwana opened this issue · 3 comments

Sanity Checking DataLoader 0: 0%| | 0/2 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/media/spacetimeformer/spacetimeformer/train.py", line 851, in
main(args)
File "/media/spacetimeformer/spacetimeformer/train.py", line 831, in main
trainer.fit(forecaster, datamodule=data_module)
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 608, in fit
call._call_and_handle_interrupt(
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 38, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in _fit_impl
self._run(model, ckpt_path=self.ckpt_path)
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1103, in _run
results = self._run_stage()
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1182, in _run_stage
self._run_train()
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1195, in _run_train
self._run_sanity_check()
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1267, in _run_sanity_check
val_loop.run()
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
self.advance(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 152, in advance
dl_outputs = self.epoch_loop.run(self._data_fetcher, dl_max_batches, kwargs)
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
self.advance(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 137, in advance
output = self._evaluation_step(**kwargs)
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 234, in _evaluation_step
output = self.trainer._call_strategy_hook(hook_name, *kwargs.values())
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1485, in _call_strategy_hook
output = fn(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 390, in validation_step
return self.model.validation_step(*args, **kwargs)
File "/media/spacetimeformer/spacetimeformer/forecaster.py", line 256, in validation_step
stats = self.step(batch, train=False)
File "/media/spacetimeformer/spacetimeformer/spacetimeformer_model/spacetimeformer_model.py", line 183, in step
loss_dict = self.compute_loss(
File "/media/spacetimeformer/spacetimeformer/spacetimeformer_model/spacetimeformer_model.py", line 247, in compute_loss
class_loss, acc = self.classification_loss(logits=logits, labels=labels)
File "/media/spacetimeformer/spacetimeformer/spacetimeformer_model/spacetimeformer_model.py", line 219, in classification_loss
acc = torchmetrics.functional.accuracy(
TypeError: accuracy() missing 1 required positional argument: 'task'
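
For context, this looks like a torchmetrics API change rather than a spacetimeformer bug: since torchmetrics 0.11, accuracy() is task-specific and requires a task argument (and, for task="multiclass", a num_classes argument as well). A minimal standalone check of the newer call on dummy tensors (the shapes here are illustrative, not the actual spacetimeformer batch shapes):

import torch
import torchmetrics

# dummy (batch, num_classes) logits and integer class labels
logits = torch.randn(8, 6)
labels = torch.randint(0, 6, (8,))

# post-0.11 call: task is required, and multiclass also needs num_classes
acc = torchmetrics.functional.accuracy(
    torch.softmax(logits, dim=1),
    labels,
    task="multiclass",
    num_classes=6,
)
print(acc)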

Hi, have you figured it out? I tried 'binary', 'multiclass', and 'multilabel' as documented at https://torchmetrics.readthedocs.io/en/stable/classification/accuracy.html, but they all failed.

I got the same error but was able to fix it by replacing the accuracy() call in classification_loss with:

preds = torch.softmax(logits, dim=1)   # convert logits to class probabilities
targets = labels
acc = torchmetrics.functional.classification.accuracy(
    preds,
    targets,
    task="multiclass",
    num_classes=preds.size(dim=1),     # number of classes inferred from the logits
)
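
In case the surrounding context helps, here is a sketch of how the patched classification_loss in spacetimeformer_model.py might look. The cross-entropy line is an assumption on my part (the traceback only shows that the method returns class_loss and acc), so keep whatever loss the original method actually computes:

import torch
import torchmetrics  # both already imported in spacetimeformer_model.py

def classification_loss(self, logits, labels):
    # hypothetical loss term; replace with the original method's loss if it differs
    class_loss = torch.nn.functional.cross_entropy(logits, labels)

    # accuracy call updated for torchmetrics >= 0.11
    preds = torch.softmax(logits, dim=1)
    acc = torchmetrics.functional.classification.accuracy(
        preds,
        labels,
        task="multiclass",
        num_classes=preds.size(dim=1),
    )
    return class_loss, acc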

@josh0tt
Hi, may I ask how you modified the code to use a custom dataset? I'm trying to train on my own data.