rasbt/LLMs-from-scratch

Section 2.6 (41) RuntimeError

lmw4051 opened this issue · 2 comments

Bug description

dataloader = create_dataloader_v1(
   raw_text, batch_size=8, max_length=4, stride=4,
   shuffle=False
)

data_iter = iter(dataloader)
inputs, targets = next(data_iter)
print("Inputs:\n", inputs)
print("\nTargets:\\n", targets)

The RuntimeError from Google Colab is shown below:

RuntimeError                              Traceback (most recent call last)
<ipython-input-81-5551546cf2e9> in <cell line: 7>()
      5 
      6 data_iter = iter(dataloader)
----> 7 inputs, targets = next(data_iter)
      8 print("Inputs:\n", inputs)
      9 print("\nTargets:\n", targets)

7 frames
/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/collate.py in collate_tensor_fn(batch, collate_fn_map)
    212         storage = elem._typed_storage()._new_shared(numel, device=elem.device)
    213         out = elem.new(storage).resize_(len(batch), *list(elem.size()))
--> 214     return torch.stack(batch, 0, out=out)
    215 
    216 

RuntimeError: stack expects each tensor to be equal size, but got [5] at entry 0 and [1] at entry 1
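
For context, this error is raised by the DataLoader's default collate function, which calls torch.stack on the individual samples of a batch and therefore requires them all to have the same shape. Below is a minimal sketch that reproduces the same failure with a hypothetical dataset (not the book's code) whose last sample is shorter than the others:

import torch
from torch.utils.data import Dataset, DataLoader

class UnevenDataset(Dataset):
    """Hypothetical dataset whose samples have different lengths,
    mimicking a sliding window that runs past the end of a short text."""
    def __init__(self):
        self.samples = [torch.arange(5), torch.arange(1)]  # shapes [5] and [1]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]

loader = DataLoader(UnevenDataset(), batch_size=2, shuffle=False)
next(iter(loader))  # RuntimeError: stack expects each tensor to be equal size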

What operating system are you using?

macOS

Where do you run your code?

Google Colab

Environment

Hi there,

There could be an issue with the dataset formatting. Could you share a bit more context on how your dataset and data loader were defined? Ideally, could you share the Google Colab notebook itself?
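
For reference, the data loader in the chapter is built from a sliding-window dataset roughly along these lines (a from-memory sketch, not copied verbatim from the book):

import torch
import tiktoken
from torch.utils.data import Dataset, DataLoader

class GPTDatasetV1(Dataset):
    def __init__(self, txt, tokenizer, max_length, stride):
        self.input_ids, self.target_ids = [], []
        token_ids = tokenizer.encode(txt)
        # Slide a fixed-size window over the token ids; every chunk has
        # exactly max_length tokens, so all samples end up the same size.
        for i in range(0, len(token_ids) - max_length, stride):
            self.input_ids.append(torch.tensor(token_ids[i:i + max_length]))
            self.target_ids.append(torch.tensor(token_ids[i + 1:i + max_length + 1]))

    def __len__(self):
        return len(self.input_ids)

    def __getitem__(self, idx):
        return self.input_ids[idx], self.target_ids[idx]

def create_dataloader_v1(txt, batch_size=4, max_length=256, stride=128,
                         shuffle=True, drop_last=True, num_workers=0):
    tokenizer = tiktoken.get_encoding("gpt2")
    dataset = GPTDatasetV1(txt, tokenizer, max_length, stride)
    return DataLoader(dataset, batch_size=batch_size, shuffle=shuffle,
                      drop_last=drop_last, num_workers=num_workers)

With this construction every sample is exactly max_length tokens long, so a size mismatch such as [5] vs. [1] usually means the dataset class was modified (for example, a changed range bound that lets the window run past the end of the token list) or that raw_text is not what you expect.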

Also, if you can share the output of

print(raw_text[-100:])

and

print(len(raw_text))

that'd be useful.
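
A quick way to check whether the text itself is the problem is to compare its tokenized length with max_length (a sketch, assuming the gpt2 tiktoken encoding used in the chapter):

import tiktoken

tokenizer = tiktoken.get_encoding("gpt2")
token_ids = tokenizer.encode(raw_text)

print(len(raw_text))    # number of characters
print(len(token_ids))   # number of tokens
print(raw_text[-100:])  # confirm the file was downloaded/read completely

# With max_length=4 and stride=4, the sliding window needs at least
# max_length + 1 tokens to yield a single full input/target pair.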

I assume this was probably due to a typo, so I am closing this for now. But please feel free to reopen if the issue still persists!