Lightning-Universe/lightning-transformers

v0.2.4 compatibility with PL v1.8

Closed this issue · 6 comments

With v0.2.4 a new NotImplementedError is raised: `LightningDataModule.on_load_checkpoint` was deprecated in v1.6 and is no longer supported as of v1.8. Use `load_state_dict` instead.

๐Ÿ› Bug

trainer.fit fails because PyTorch Lightning's configuration validation rejects the data module:

> File ~\miniconda3\envs\UnBias-99-5\lib\site-packages\pytorch_lightning\trainer\configuration_validator.py:61, in verify_loop_configurations(trainer)
>      59 _check_deprecated_logger_methods(trainer)
>      60 # TODO: Delete this check in v2.0
> ---> 61 _check_unsupported_datamodule_hooks(trainer)

To Reproduce

trainer.fit(model, datamodel)

> ---------------------------------------------------------------------------
> NotImplementedError                       Traceback (most recent call last)
> Input In [58], in <cell line: 1>()
> ----> 1 trainer.fit(model,dm)
> 
> File ~\miniconda3\envs\MyEnv\lib\site-packages\pytorch_lightning\trainer\trainer.py:579, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
>     577     raise TypeError(f"`Trainer.fit()` requires a `LightningModule`, got: {model.__class__.__qualname__}")
>     578 self.strategy._lightning_module = model
> --> 579 call._call_and_handle_interrupt(
>     580     self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
>     581 )
> 
> File ~\miniconda3\envs\MyEnv\lib\site-packages\pytorch_lightning\trainer\call.py:38, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs)
>      36         return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
>      37     else:
> ---> 38         return trainer_fn(*args, **kwargs)
>      40 except _TunerExitException:
>      41     trainer._call_teardown_hook()
> 
> File ~\miniconda3\envs\MyEnv\lib\site-packages\pytorch_lightning\trainer\trainer.py:621, in Trainer._fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
>     614 ckpt_path = ckpt_path or self.resume_from_checkpoint
>     615 self._ckpt_path = self._checkpoint_connector._set_ckpt_path(
>     616     self.state.fn,
>     617     ckpt_path,  # type: ignore[arg-type]
>     618     model_provided=True,
>     619     model_connected=self.lightning_module is not None,
>     620 )
> --> 621 self._run(model, ckpt_path=self.ckpt_path)
>     623 assert self.state.stopped
>     624 self.training = False
> 
> File ~\miniconda3\envs\MyEnv\lib\site-packages\pytorch_lightning\trainer\trainer.py:984, in Trainer._run(self, model, ckpt_path)
>     981 self._callback_connector._attach_model_callbacks()
>     982 self._callback_connector._attach_model_logging_functions()
> --> 984 verify_loop_configurations(self)
>     986 # hook
>     987 log.detail(f"{self.__class__.__name__}: preparing data")
> 
> File ~\miniconda3\envs\MyEnv\lib\site-packages\pytorch_lightning\trainer\configuration_validator.py:61, in verify_loop_configurations(trainer)
>      59 _check_deprecated_logger_methods(trainer)
>      60 # TODO: Delete this check in v2.0
> ---> 61 _check_unsupported_datamodule_hooks(trainer)
> 
> File ~\miniconda3\envs\MyEnv\lib\site-packages\pytorch_lightning\trainer\configuration_validator.py:295, in _check_unsupported_datamodule_hooks(trainer)
>     290     raise NotImplementedError(
>     291         "`LightningDataModule.on_save_checkpoint` was deprecated in v1.6 and is no longer supported as of v1.8."
>     292         " Use `state_dict` instead."
>     293     )
>     294 if is_overridden("on_load_checkpoint", datahook_selector.datamodule):
> --> 295     raise NotImplementedError(
>     296         "`LightningDataModule.on_load_checkpoint` was deprecated in v1.6 and is no longer supported as of v1.8."
>     297         " Use `load_state_dict` instead."
>     298     )
> 
> NotImplementedError: `LightningDataModule.on_load_checkpoint` was deprecated in v1.6 and is no longer supported as of v1.8. Use `load_state_dict` instead.
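
For reference, since v1.6 PyTorch Lightning expects LightningDataModule checkpoint state to go through state_dict / load_state_dict rather than the removed hooks. A minimal sketch of that migration, using a hypothetical tokenizer_name attribute (not the actual lightning-transformers implementation):

import pytorch_lightning as pl

class MyDataModule(pl.LightningDataModule):
    def __init__(self):
        super().__init__()
        self.tokenizer_name = None  # hypothetical state we want to checkpoint

    # instead of the removed on_save_checkpoint hook:
    def state_dict(self):
        # called by the Trainer when writing a checkpoint
        return {"tokenizer_name": self.tokenizer_name}

    # instead of the removed on_load_checkpoint hook:
    def load_state_dict(self, state_dict):
        # called by the Trainer when restoring from a checkpoint
        self.tokenizer_name = state_dict["tokenizer_name"]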

Code sample

import os
from accelerate import (init_empty_weights)
from transformers import (FlaubertTokenizer, FlaubertWithLMHeadModel, TrainingArguments, DataCollatorForLanguageModeling) 
from datasets import load_from_disk
import pytorch_lightning as pl
from lightning_transformers.task.nlp.masked_language_modeling import (MaskedLanguageModelingTransformer, MaskedLanguageModelingDataModule)

# drive_letter, dataset_dir and device are defined earlier in the original notebook
dataset = load_from_disk(os.path.join(drive_letter, dataset_dir, 'dataset'))
dataset = dataset.remove_columns(["text"])
dataset = dataset.shuffle()
dataset.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask', 'special_tokens_mask', 'labels'], device=device) 

LM_tokenizer = FlaubertTokenizer.from_pretrained('./tokenizer/FlauBERT_tokenizer', do_lowercase=False)

# LMhead_model (a FlaubertWithLMHeadModel instance) is built earlier in the original notebook
with init_empty_weights():
  model = MaskedLanguageModelingTransformer(
              pretrained_model=LMhead_model,
              tokenizer=LM_tokenizer,
              load_weights=False,
              low_cpu_mem_usage=True,
              device_map="auto"
              #deepspeed_sharding=True,  # Linux only, defer initialization of the model to shard/load pre-train weights
          )

batch_size=2

datamodel = MaskedLanguageModelingDataModule(
    batch_size=batch_size,
    dataset=dataset,
    tokenizer=LM_tokenizer,
    num_workers=os.cpu_count())

trainer = pl.Trainer(
    accelerator="auto",
    devices="auto",
    #strategy="deepspeed_stage_3", # linux only
    precision=16,
    max_epochs=1,
    #strategy='dp',
    #auto_lr_find=True,
    #detect_anomaly=True
    #val_check_interval=0
    #progress_bar_refresh_rate=50
)

trainer.fit(model, datamodel)
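
Until lightning-transformers stops overriding the removed hook, one possible workaround is an untested sketch like the one below, assuming the hook is inherited from a lightning-transformers base class and that PL 1.8 still defines it on LightningDataModule: point the hook back at the base implementation so the configuration validator no longer sees it as overridden. Otherwise, pinning pytorch-lightning<1.8 avoids the check entirely.

import pytorch_lightning as pl
from lightning_transformers.task.nlp.masked_language_modeling import MaskedLanguageModelingDataModule

# Untested workaround sketch: reset the deprecated hook to the base-class
# version so pytorch_lightning's is_overridden() check passes. Whatever the
# library restored through this hook (e.g. tokenizer state) will no longer be
# reloaded from checkpoints, so recreate the datamodule with the tokenizer
# explicitly when resuming.
MaskedLanguageModelingDataModule.on_load_checkpoint = pl.LightningDataModule.on_load_checkpoint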

Expected behavior

Trainer fits

Environment

  • PyTorch Version (e.g., 1.0): 1.12.1
  • OS (e.g., Linux): Windows 10
  • How you installed PyTorch (conda, pip, source): conda (py3.9_cuda11.6_cudnn8_0 pytorch)
  • Python version: 3.9
  • CUDA/cuDNN version: CUDA 11.6, cuDNN 8.0
  • GPU models and configuration: NVIDIA Quadro RTX 3000
  • Any other relevant information: none

Additional context

This comes on top of the problem already seen with 0.2.3, and it persists despite the 0.2.4 release.

what's your lightning-transformers version?

lightning-transformers 0.2.4 pypi_0 pypi

okay.. can you share a colab reproducing this issue?

> okay.. can you share a colab reproducing this issue?

@rohitgr7 while producing a minimal version for you to test, I ran into the other issue mentioned above.

Borda commented

seems that the notebook is not available any more :(
and can't reproduce from the code sample as some functions/classes are missing 🦦

> seems that the notebook is not available any more :( and can't reproduce from the code sample as some functions/classes are missing 🦦

Well, this repo was archived on the day I reported this bug, so I felt there was no point in pursuing it.

This repository was archived (read-only) on Nov 18th, 2022. Thanks to everyone who contributed to lightning-transformers; we feel it's time to move on.