Update deprecated method in generic.py for compatibility with newer versions of PyTorch
Haleshot opened this issue · 1 comment
Haleshot commented
System Info
Versions
- transformers version: 4.34.1
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.11.4
- Huggingface_hub version: 0.23.0
- Safetensors version: 0.4.3
- Accelerate version: 0.30.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.3.0+cpu (False)
- Tensorflow version (GPU?): 2.16.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.8.3 (cpu)
- Jax version: 0.4.27
- JaxLib version: 0.4.27
- Using GPU in script?: If detected, then yes.
- Using distributed or parallel set-up in script?: No
Who can help?
No response
Information
- The official example scripts
- My own modified scripts
Tasks
- An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
- My own task or dataset (give details below)
Reproduction
The generic.py file inside transformers registers ModelOutput subclasses as pytree nodes via the deprecated torch.utils._pytree._register_pytree_node method.
Code Sample
import torch
from transformers import AutoModel

class MyModel(AutoModel):
    def __init_subclass__(cls) -> None:
        """Register subclasses as pytree nodes.

        This is necessary to synchronize gradients when using
        `torch.nn.parallel.DistributedDataParallel` with `static_graph=True`
        with modules that output `ModelOutput` subclasses.
        """
        if torch.cuda.is_available():
            import torch.utils._pytree

            torch.utils._pytree._register_pytree_node(
                cls,
                torch.utils._pytree._dict_flatten,
                lambda values, context: cls(**torch.utils._pytree._dict_unflatten(values, context)),
            )
Expected behavior
When initializing MyModel, there should be no warnings or errors stemming from the use of deprecated methods such as torch.utils._pytree._register_pytree_node. The initialization process should proceed smoothly and without interruption.
Haleshot commented
The warning seems to arise because the library was not updated to the latest version on my end. Closing the issue now.