modelscope/FunASR

Error when fine-tuning the seaco_paraformer model on a single machine with multiple GPUs


🐛 Bug

funasr 1.1.5: fine-tuning the hotword model works on a single GPU, but fails on a single machine with multiple GPUs.

To Reproduce

```bash
workspace=`pwd`

# which gpu to train or finetune
export CUDA_VISIBLE_DEVICES="0,1,2,3"
gpu_num=$(echo $CUDA_VISIBLE_DEVICES | awk -F "," '{print NF}')

# model_name from model_hub, or model_dir in local path
# option 1, download model automatically
model_name_or_model_dir="speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"

# data dir, which contains: train.jsonl, val.jsonl
data_dir="data"
train_data="${data_dir}/train.jsonl"
val_data="${data_dir}/val.jsonl"

output_dir="./outputs"
log_file="${output_dir}/log.txt"

mkdir -p ${output_dir}
echo "log_file: ${log_file}"

DISTRIBUTED_ARGS="
--nnodes ${WORLD_SIZE:-1}
--nproc_per_node $gpu_num
--node_rank ${RANK:-0}
--master_addr ${MASTER_ADDR:-127.0.0.1}
--master_port ${MASTER_PORT:-26669}
"

echo $DISTRIBUTED_ARGS

torchrun $DISTRIBUTED_ARGS \
../../../funasr/bin/train_ds.py \
++model="${model_name_or_model_dir}" \
++train_data_set_list="${train_data}" \
++valid_data_set_list="${val_data}" \
++dataset="AudioDatasetHotword" \
++dataset_conf.index_ds="IndexDSJsonl" \
++dataset_conf.data_split_num=1 \
++dataset_conf.batch_sampler="BatchSampler" \
++dataset_conf.batch_size=6000 \
++dataset_conf.sort_size=1024 \
++dataset_conf.batch_type="token" \
++dataset_conf.num_workers=4 \
++train_conf.max_epoch=50 \
++train_conf.log_interval=1 \
++train_conf.resume=true \
++train_conf.validate_interval=8000 \
++train_conf.save_checkpoint_interval=8000 \
++train_conf.avg_keep_nbest_models_type='loss' \
++train_conf.keep_nbest_models=20 \
++train_conf.avg_nbest_model=10 \
++train_conf.use_deepspeed=false \
++train_conf.deepspeed_config=${deepspeed_config} \
++optim_conf.lr=0.0002 \
++output_dir="${output_dir}" &> ${log_file}
```
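For reference, each line of `train.jsonl` / `val.jsonl` is expected to be one JSON object per utterance in the standard FunASR jsonl layout. The sketch below is illustrative only: the key, paths, text, and lengths are made up, and it is not confirmed here whether `AudioDatasetHotword` expects any extra per-utterance fields beyond the stock ones.

```bash
# Peek at the first training sample. Field names follow the stock FunASR jsonl
# format (key / source / source_len / target / target_len); values are examples.
head -n 1 data/train.jsonl
# {"key": "utt_0001", "source": "/data/wav/utt_0001.wav", "source_len": 88, "target": "欢迎使用热词定制", "target_len": 8}
```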

Environment:
python==3.12.3
torch==2.3.1
funasr==1.1.5
GPU: A800 * 4
CUDA: 12.1
[screenshot of the error trace attached]

Expected behavior

Training proceeds normally.
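Since single-GPU training works but the multi-GPU run fails, a quick standalone NCCL check (independent of FunASR; the script path and port below are arbitrary) can help tell a FunASR problem apart from a node-level communication problem:

```bash
# Minimal DDP sanity check on the same 4 GPUs used for fine-tuning.
cat > /tmp/ddp_check.py <<'EOF'
import os
import torch
import torch.distributed as dist

dist.init_process_group("nccl")            # torchrun provides the env:// rendezvous vars
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
x = torch.ones(1, device="cuda")
dist.all_reduce(x)                         # expect 4.0 on every rank with 4 GPUs
print(f"rank {dist.get_rank()}: {x.item()}")
dist.destroy_process_group()
EOF
torchrun --nproc_per_node 4 --master_port 26670 /tmp/ddp_check.py
```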

I'm running into the same problem.

Update funasr, modelscope and re-download the model.
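A minimal sketch of that fix, assuming the model was auto-downloaded from ModelScope into the default cache (the cache path and the `iic/...` model id below are assumptions inferred from the model name in the script):

```bash
# Upgrade both packages.
pip install -U funasr modelscope

# Remove the cached copy so the next run fetches a fresh one
# (assumes the default ModelScope cache location).
rm -rf ~/.cache/modelscope/hub/iic/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch

# Re-download explicitly via the ModelScope Python API.
python -c "from modelscope import snapshot_download; snapshot_download('iic/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch')"
```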