yeyupiaoling/Whisper-Finetune

Ubuntu 20.04 environment issue

Closed this issue · 8 comments

Following your instructions, training fails to run on Ubuntu 20.04. Could you provide an environment-setup guide?

Hi, these were my steps on Ubuntu 20.04:
1. Ran ubuntu-drivers devices, which reported nvidia-driver-535 - distro non-free recommended, then installed the driver with sudo apt-get install nvidia-driver-535.
2. Installed Anaconda via bash Anaconda3-2021.11.sh (Python 3.9.7, pip 21.2.4), then sudo reboot.
3. In the base environment, upgraded pip to 24.0 with python -m pip install --upgrade pip.
4. In the base environment, ran conda install pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=11.8 -c pytorch -c nvidia.
5. In the base environment, ran cd Whisper-Finetune and then python -m pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple.
6. Running CUDA_VISIBLE_DEVICES=0 python finetune.py --base_model=openai/whisper-tiny --output_dir=output/ fails with the error below:
----------- Configuration Arguments -----------
train_data: dataset/train.json
test_data: dataset/test.json
base_model: openai/whisper-base
output_dir: output/
warmup_steps: 50
logging_steps: 100
eval_steps: 2000
save_steps: 2000
num_workers: 16
learning_rate: 0.001
min_audio_len: 0.5
max_audio_len: 30
use_adalora: True
fp16: True
use_8bit: False
timestamps: True
use_compile: True
local_files_only: False
num_train_epochs: 3
language: None
task: transcribe
augment_config_path: None
resume_from_checkpoint: None
per_device_train_batch_size: 6
per_device_eval_batch_size: 8
gradient_accumulation_steps: 1

Reading data list: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 22976/22976 [00:00<00:00, 160482.02it/s]
Reading data list: 100%|████████████████████████████████████████████████████████████████████████████████████████████| 699065/699065 [00:06<00:00, 104571.55it/s]
Training data: 22976, test data: 644753
Loading LoRA modules...
adding LoRA modules...
['k_proj', 'q_proj', 'v_proj', 'out_proj', 'fc1', 'fc2']

trainable params: 1,623,168 || all params: 74,217,184 || trainable%: 2.1870514515883546
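The trainable% printed above is simply the ratio of LoRA adapter parameters to total parameters. A quick sanity check of that arithmetic, using the two counts from the log:

```python
# Sanity-check the trainable% reported in the log above:
# trainable% = trainable params / all params * 100
trainable = 1_623_168   # LoRA adapter parameters (from the log)
total = 74_217_184      # all parameters of the model plus adapters (from the log)
pct = trainable / total * 100
print(f"trainable%: {pct}")  # matches the logged 2.1870514515883546
```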

/home/jky/anaconda3/lib/python3.9/site-packages/accelerate/accelerator.py:432: FutureWarning: Passing the following arguments to Accelerator is deprecated and will be removed in version 1.0 of Accelerate: dict_keys(['dispatch_batches', 'split_batches', 'even_batches', 'use_seedable_sampler']). Please pass an accelerate.DataLoaderConfiguration instead:
dataloader_config = DataLoaderConfiguration(dispatch_batches=None, split_batches=False, even_batches=True, use_seedable_sampler=True)
warnings.warn(
0%| | 0/11490 [00:00<?, ?it/s][2024-03-24 08:38:46,428] [9/0] torch._dynamo.output_graph: [WARNING] nn.Module forward/_pre hooks are only partially supported, and were detected in your model. In particular, if you do not change/remove hooks after calling .compile(), you can disregard this warning, and otherwise you may need to set torch._dynamo.config.skip_nnmodule_hook_guards=False to ensure recompiling after changing hooks.See https://pytorch.org/docs/master/compile/nn-module.html for more information and limitations.
[2024-03-24 08:39:08,030] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64)
[2024-03-24 08:39:08,030] torch._dynamo.convert_frame: [WARNING] function: 'getitem' (/home/jky/anaconda3/lib/python3.9/site-packages/torch/nn/modules/container.py:737)
[2024-03-24 08:39:08,030] torch._dynamo.convert_frame: [WARNING] to diagnose recompilation issues, set env variable TORCHDYNAMO_REPORT_GUARD_FAILURES=1 and also see https://pytorch.org/docs/master/compile/troubleshooting.html.
Traceback (most recent call last):
File "/home/jky/Whisper-Finetune/finetune.py", line 155, in <module>
trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
File "/home/jky/anaconda3/lib/python3.9/site-packages/transformers/trainer.py", line 1780, in train
return inner_training_loop(
File "/home/jky/anaconda3/lib/python3.9/site-packages/transformers/trainer.py", line 2118, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/transformers/trainer.py", line 3036, in training_step
loss = self.compute_loss(model, inputs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/transformers/trainer.py", line 3059, in compute_loss
outputs = model(**inputs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/accelerate/utils/operations.py", line 822, in forward
return model_forward(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/accelerate/utils/operations.py", line 810, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/amp/autocast_mode.py", line 16, in decorate_autocast
return func(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 328, in _fn
return fn(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/peft/peft_model.py", line 561, in forward
with self._enable_peft_forward_hooks(*args, **kwargs):
File "/home/jky/anaconda3/lib/python3.9/site-packages/peft/peft_model.py", line 563, in
return self.get_base_model()(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 1754, in forward
outputs = self.model(
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 1612, in forward
encoder_outputs = self.encoder(
File "/home/jky/anaconda3/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 1628, in
decoder_outputs = self.decoder(
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 1444, in forward
layer_outputs = decoder_layer(
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 870, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 298, in forward
query_states = self.q_proj(hidden_states) * self.scaling
File "/home/jky/anaconda3/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 323, in
key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
File "/home/jky/anaconda3/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 324, in
value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 490, in catch_errors
return callback(frame, cache_entry, hooks, frame_state)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 641, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 133, in _fn
return fn(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 389, in _convert_frame_assert
return _compile(
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 569, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
r = func(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 491, in compile_inner
out_code = transform_code_object(code, transform)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 458, in transform
tracer.run()
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2074, in run
super().run()
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 724, in run
and self.step()
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 688, in step
getattr(self, inst.opname)(inst)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1103, in COMPARE_OP
BuiltinVariable(supported_any[op], **options).call_function(
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/variables/builtin.py", line 618, in call_function
result = handler(tx, *args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/variables/builtin.py", line 1356, in _comparison
return BaseListVariable.list_compare(tx, op, left, right)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/variables/lists.py", line 149, in list_compare
return BuiltinVariable(operator.not_).call_function(tx, [eq_result], {})
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/variables/builtin.py", line 618, in call_function
result = handler(tx, *args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/variables/builtin.py", line 1436, in call_not_
return SymNodeVariable.create(
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/variables/tensor.py", line 676, in create
sym_num = get_fake_value(proxy.node, tx)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1376, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1337, in get_fake_value
return wrap_fake_exception(
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 916, in wrap_fake_exception
return fn()
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1338, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1410, in run_node
raise RuntimeError(fn_str + str(e)).with_traceback(e.__traceback__) from e
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1397, in run_node
return node.target(*args, **kwargs)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/__init__.py", line 352, in __bool__
return self.node.bool_()
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/fx/experimental/symbolic_shapes.py", line 972, in bool_
return self.guard_bool("", 0)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/fx/experimental/symbolic_shapes.py", line 954, in guard_bool
r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/fx/experimental/symbolic_shapes.py", line 3536, in evaluate_expr
self._maybe_guard_eq(sympy.Eq(expr, concrete_val), True)
File "/home/jky/anaconda3/lib/python3.9/site-packages/torch/fx/experimental/symbolic_shapes.py", line 3331, in _maybe_guard_eq
assert len(free) > 0, f"The expression should not be static by this point: {expr}"
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in function not_>(
(Eq(s1, 43) & Eq(s3, 43),), **{}):
The expression should not be static by this point: False

from user code:
File "/home/jky/anaconda3/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 351, in forward
if attention_mask.size() != (bsz, 1, tgt_len, src_len):

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information

You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True

0%| | 0/11490 [00:44<?, ?it/s]
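The traceback comes from torch.compile (use_compile: True in the configuration above) tracing Whisper's attention-mask shape check, and the error message itself names the standard escape hatch. A minimal sketch of the two usual workarounds; this is generic PyTorch advice, not something specific to Whisper-Finetune, and the --use_compile flag is assumed from the printed configuration:

```python
# Workaround sketch: suppress dynamo errors so frames that fail to compile
# fall back to eager mode. Guarded import so the sketch also runs in
# environments where torch is not installed.
try:
    import torch._dynamo
    torch._dynamo.config.suppress_errors = True  # fall back to eager on compile errors
    applied = True
except ImportError:
    applied = False  # torch not installed; nothing to configure

# Alternatively, rerun finetune.py with use_compile disabled
# (the configuration dump suggests it is exposed as a CLI argument),
# which avoids torch.compile/dynamo entirely.
print("suppress_errors applied:", applied)
```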

nvidia-smi
Sun Mar 24 09:57:40 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.07 Driver Version: 535.161.07 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 4090 Off | 00000000:4B:00.0 Off | Off |
| 36% 38C P8 28W / 450W | 844MiB / 24564MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA GeForce RTX 4090 Off | 00000000:B1:00.0 Off | Off |
| 36% 34C P8 19W / 450W | 11MiB / 24564MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 2500 G /usr/lib/xorg/Xorg 9MiB |
| 0 N/A N/A 2682 G /usr/bin/gnome-shell 8MiB |
| 0 N/A N/A 4242 G /usr/lib/xorg/Xorg 812MiB |
| 1 N/A N/A 2500 G /usr/lib/xorg/Xorg 4MiB |
+---------------------------------------------------------------------------------------+

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
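Before digging further, it can also help to confirm that the conda-installed PyTorch actually sees the GPUs that nvidia-smi reports. A small guarded sketch (it degrades gracefully when torch is absent):

```python
# Report what the installed PyTorch build sees; compare cuda_build against
# the pytorch-cuda=11.8 conda package and the driver's CUDA version.
try:
    import torch
    info = {
        "torch": torch.__version__,
        "cuda_available": torch.cuda.is_available(),
        "cuda_build": torch.version.cuda,  # CUDA toolkit torch was built against
    }
except ImportError:
    info = {"torch": None, "cuda_available": False, "cuda_build": None}
print(info)
```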

For error output like this, a screenshot of just the last part would have been enough.

From the look of it, the problem already occurs while building the model. Did you download the model yourself, and is it intact? Also, which PyTorch version and which Transformers version are you using?
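To answer the version questions, the installed versions can be printed with a generic snippet like this (the package names are the obvious candidates from the traceback, not something mandated by the repo):

```python
# Print the package versions the maintainer asked about.
from importlib.metadata import version, PackageNotFoundError

versions = {}
for pkg in ("torch", "transformers", "peft", "accelerate"):
    try:
        versions[pkg] = version(pkg)
    except PackageNotFoundError:
        versions[pkg] = "not installed"
print(versions)
```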