ModuleNotFoundError | Step 4 of Custom Dataset
Opened this issue · 5 comments
641i130 commented
I've gotten to step 4 of making a custom dataset (skipping the LJ Speech and VCTK steps) and I've stumbled across a ModuleNotFoundError. I'm not sure how this is happening.
(vits2) root@hugeserver:/mnt/vits2# python preprocess/mel_transform.py --data_dir audio/ -c datasets/custom_james_voices/config.yaml
Traceback (most recent call last):
File "/mnt/vits2/preprocess/mel_transform.py", line 13, in <module>
from utils.hparams import get_hparams_from_file, HParams
ModuleNotFoundError: No module named 'utils.hparams'; 'utils' is not a package
(vits2) root@hugeserver:/mnt/vits2# ls
audio data_utils.py inference_batch.ipynb LICENSE model README.md text train.py
datasets figures inference.ipynb losses.py preprocess requirements.txt train_ms.py utils
(vits2) root@hugeserver:/mnt/vits2#
Other useful information that might help:
Ubuntu 22.04.3 LTS
RTX 3090
We're using conda, following the steps shown in the README.
# echo $PYTHONPATH
results in:
/mnt/vits
641i130 commented
Additionally, it seems config.yaml is still being parsed as a JSON file.
p0p4k commented
Create an empty __init__.py file in the utils folder.
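(Editor's note: a minimal sketch of the suggestion above, assuming it is run from the repository root, so "utils" is a relative path; the issue's checkout lives at /mnt/vits2.)

```python
from pathlib import Path

# An empty __init__.py marks the `utils` directory as a regular package,
# so `from utils.hparams import ...` can resolve.
pkg_dir = Path("utils")
pkg_dir.mkdir(exist_ok=True)
(pkg_dir / "__init__.py").touch()
```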
K2O7I commented
You can try putting sys.path.append("/mnt/vits2")
inside mel_transform.py.
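(Editor's note: a more portable variant of this suggestion, sketched under the assumption that the lines go at the top of preprocess/mel_transform.py, before the `utils` imports; it computes the repo root instead of hard-coding /mnt/vits2.)

```python
import os
import sys

# The repo root is the parent of the preprocess/ directory that holds this
# script; appending it to sys.path makes `utils.hparams` importable no
# matter where the script is launched from.
repo_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(repo_root)
```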
HuuHuy227 commented
Any solution for this?
brambox commented
If it's a Jupyter notebook or Colab, try:
import os
os.environ['PYTHONPATH'] = "/path/to/your/vits2/folder"