Issue using pre-trained german model
patrickjane opened this issue · 22 comments
Not entirely sure if I am doing things right, but:
- installed deepspeech v0.6.1 (pip3 install deepspeech==0.6.1)
- downloaded the pre-trained models for 0.6.1 provided by this repo
- ran deepspeech with parameters & input audio
Result:
$ deepspeech --model ../model/0.6/output_graph.pb --lm ../model/0.6/lm.binary --trie ../model/0.6/trie --audio ../../test.wav
Loading model from file ../model/0.6/output_graph.pb
TensorFlow: v1.14.0-21-ge77504a
DeepSpeech: v0.6.1-0-g3df20fe
ERROR: Model provided has model identifier 'inpu', should be 'TFL3'
Error at reading model file ../model/0.6/output_graph.pb
Traceback (most recent call last):
File "/home/pi/deepspeech/deepspeech-6-venv/venv/bin/deepspeech", line 10, in <module>
sys.exit(main())
File "/home/pi/deepspeech/deepspeech-6-venv/venv/lib/python3.7/site-packages/deepspeech/client.py", line 113, in main
ds = Model(args.model, args.beam_width)
File "/home/pi/deepspeech/deepspeech-6-venv/venv/lib/python3.7/site-packages/deepspeech/__init__.py", line 42, in __init__
raise RuntimeError("CreateModel failed with error code {}".format(status))
RuntimeError: CreateModel failed with error code 12288
Not so sure what is going wrong here. Any advice?
As the model was trained for v0.6.0 and DeepSpeech is not backward compatible, I would recommend testing with deepspeech==0.6.0.
Same result:
(venv) pi@calypso:~/deepspeech/deepspeech-6-venv $ deepspeech --model ../model/0.6/output_graph.pb --lm ../model/0.6/lm.binary --trie ../model/0.6/trie --audio ../../test.wav
Loading model from file ../model/0.6/output_graph.pb
TensorFlow: v1.14.0-21-ge77504a
DeepSpeech: v0.6.0-0-g6d43e21
ERROR: Model provided has model identifier 'inpu', should be 'TFL3'
Error at reading model file ../model/0.6/output_graph.pb
Traceback (most recent call last):
File "/home/pi/deepspeech/deepspeech-6-venv/venv/bin/deepspeech", line 10, in <module>
sys.exit(main())
File "/home/pi/deepspeech/deepspeech-6-venv/venv/lib/python3.7/site-packages/deepspeech/client.py", line 113, in main
ds = Model(args.model, args.beam_width)
File "/home/pi/deepspeech/deepspeech-6-venv/venv/lib/python3.7/site-packages/deepspeech/__init__.py", line 42, in __init__
raise RuntimeError("CreateModel failed with error code {}".format(status))
RuntimeError: CreateModel failed with error code 12288
(venv) pi@calypso:~/deepspeech/deepspeech-6-venv $
BTW: I get the same error when using deepspeech v0.7.1 and the pre-trained model from this fork.
So I must be making some obvious mistake, I guess.
Could it be that you are trying to run a desktop model as a TFLite one? That doesn't work:
https://discourse.mozilla.org/t/using-0-6-0-model-on-raspberry-pi-4-4gb/49802
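The mix-up in the linked thread can be spotted directly from the file header: TFLite flatbuffers carry the file identifier "TFL3" at byte offset 4, while a frozen TensorFlow .pb graph does not, which is why a TFLite-based deepspeech build rejects output_graph.pb with "model identifier 'inpu', should be 'TFL3'". A minimal sketch (the helper name is mine, not part of the deepspeech API):

```python
def looks_like_tflite(path):
    """Return True if the file carries the TFLite flatbuffer identifier.

    TFLite models store the 4-byte file identifier b"TFL3" at offset 4;
    a frozen TensorFlow protobuf graph has arbitrary bytes there (often
    the start of a node name such as "input_node", hence 'inpu').
    """
    with open(path, "rb") as f:
        header = f.read(8)
    return header[4:8] == b"TFL3"
```

Running this against the downloaded model before invoking deepspeech would show immediately whether it matches the runtime the package was built with.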
Sounds familiar. However:
- I don't know how to influence whether it uses tf or tflite when I just do pip3 install deepspeech
- This repository does not provide a .tflite file in https://drive.google.com/drive/folders/1BKblYaSLnwwkvVOQTQ5roOeN0SuQm8qr
So my expectation was that I would just install deepspeech and then use the model files (output_graph.pb, lm.binary, trie) from https://drive.google.com/drive/folders/1BKblYaSLnwwkvVOQTQ5roOeN0SuQm8qr.
I'm pretty much a newbie in terms of deepspeech, so if this is not the way it is supposed to work, please let me know how the german model provided in https://drive.google.com/drive/folders/1BKblYaSLnwwkvVOQTQ5roOeN0SuQm8qr shall be used correctly :)
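Aside from matching the package and model versions, DeepSpeech 0.6 expects 16 kHz, mono, 16-bit PCM WAV input, so it can be worth validating the audio file before blaming the model. A quick stdlib check (hypothetical helper, not part of the deepspeech client):

```python
import wave

def check_wav(path, expected_rate=16000):
    """Verify a WAV file matches what DeepSpeech 0.6 models expect."""
    with wave.open(path, "rb") as w:
        return (w.getnchannels() == 1            # mono
                and w.getsampwidth() == 2        # 16-bit samples
                and w.getframerate() == expected_rate)
```

If this returns False for test.wav, resampling with sox or ffmpeg to 16 kHz mono would be the usual fix.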
Main question is: is this running on a Raspberry Pi? It won't work there, or are you on a server and just had the wrong libs? TFLite models need to be exported from a checkpoint. Checkpoints are available, so build the tflite as described here:
https://discourse.mozilla.org/t/how-to-export-model-normal-and-tflite-from-a-checkpoint/53802
I'm on a raspberry pi, yes.
@AASHISHAG if you still have the machine setup, you could simply do a tflite export from the checkpoint. Looks like there is a need for that :-)
Thank you @olafthiele for putting this up. I used the below command to build the model.
./DeepSpeech.py --alphabet_config_path ../dependencies_v0.6.0/alphabet.txt --lm_trie_path ../dependencies_v0.6.0/trie --lm_binary_path ../dependencies_v0.6.0/lm.binary --checkpoint_dir release_v0.6.0 --export_tflite release_v0.6.0_tflite --export_dir release_v0.6.0_tflite
@patrickjane: You can now find the tflite version (output_graph.tflite) in the v0.6.0 release. Hope it works.
Perfect, so now people can use it on their raspis.
Thanks for the follow-up, however now I end up with this:
(venv) pi@calypso:~/deepspeech/model/0.6 $ deepspeech --model output_graph.tflite --lm lm.binary --trie trie --audio ../../../test.wav
Loading model from file output_graph.tflite
TensorFlow: v1.14.0-21-ge77504a
DeepSpeech: v0.6.0-0-g6d43e21
INFO: Initialized TensorFlow Lite runtime.
Segmentation fault
(venv) pi@calypso:~/deepspeech/model/0.6 $
Not so sure how you would get Python to core dump like this o_O. gdb says:
(gdb) run deepspeech --model ../../../model/0.6/output_graph.tflite --lm ../../../model/0.6/lm.binary --trie ../../../model/0.6/trie --audio ../../../test.wav
Starting program: /home/pi/deepspeech/deepspeech-6-venv/venv/bin/python3 deepspeech --model ../../../model/0.6/output_graph.tflite --lm ../../../model/0.6/lm.binary --trie ../../../model/0.6/trie --audio ../../../test.wav
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1".
[Detaching after fork from child process 4772]
[New Thread 0x74cd2460 (LWP 4774)]
[New Thread 0x744d1460 (LWP 4775)]
[New Thread 0x71cd0460 (LWP 4776)]
Loading model from file ../../../model/0.6/output_graph.tflite
TensorFlow: v1.14.0-21-ge77504a
DeepSpeech: v0.6.0-0-g6d43e21
INFO: Initialized TensorFlow Lite runtime.
Thread 1 "python3" received signal SIGSEGV, Segmentation fault.
0x76fbaf98 in memcpy () from /usr/lib/arm-linux-gnueabihf/libarmmem-v7l.so
(gdb) bt
#0 0x76fbaf98 in memcpy () from /usr/lib/arm-linux-gnueabihf/libarmmem-v7l.so
#1 0x00000000 in ?? ()
Can there be a binary incompatibility with anything? Maybe this is some tflite issue?
Could one of you guys try it out on a Pi?
[edit] same issue with deepspeech v0.6.1.
As there was no tflite version before, neither of us has a use case for it. Please head over to the DeepSpeech forum, as this is a known problem. And search there before asking questions; this is a known tflite issue:
https://discourse.mozilla.org/t/android-tflite-model-inference-issues/35173
Alright, then thanks for your support. Looking forward to the v0.7 model.
Releasing v0.7.0 soon.
Do update, in case you have got this working. Keeping this ticket open.
Hello everyone,
facing the same issues with the tflite model on a Pi 3 and on my development PC (macOS). The .pb model works fine. Do you guys have a solution?
Did you download the tflite provided in the repo and it has problems? What exactly is your problem?
I downloaded the tflite provided in the repo. On the Raspberry Pi 3B+ I get a segmentation fault. The English pre-trained model called "deepspeech-0.7.1.tflite" works fine. On my development laptop I get something similar, a SIGSEGV. I already tried multiple versions to get the provided model to work, without success. I tried to google it, and some users on multiple platforms think it's an export error from .pb to tflite, but I'm not quite sure.
Please read before you try, this is all still in development. Four comments above you can see that the repo is for 0.6, not 0.7. Incompatible. So change the DS version or wait ... but please start reading.
I'm very thankful for the effort to develop a German DeepSpeech model. I was just responding to your question. I know about the incompatibility between 0.6 and 0.7 DS models, so I tried multiple deepspeech versions. And the .pb model works with the right version. So I just want to inform you that there is a possibility that the provided tflite model is corrupted or something like that, as the thread creator posted.
Hello, thanks for sharing this repo, this is really helpful for my study. You mentioned you would release a model based on 0.7. I am actually waiting for it. Can you tell me in which time frame you expect to release it?
@iamronic, @patrickjane, @HannesDittmann: v0.7.4 for deepspeech is now available, along with tflite. Kindly refer to the README.
@AASHISHAG
Would it be possible to provide DeepSpeech German in pbmm format for DeepSpeech 0.9?
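For reference, the .pbmm (memory-mapped) format is produced from a frozen .pb graph with the convert_graphdef_memmapped_format tool that ships with DeepSpeech's native_client builds. A sketch, assuming the tool binary is on PATH and the file names are illustrative:

```shell
# Convert a frozen TensorFlow graph to DeepSpeech's memory-mapped format.
# convert_graphdef_memmapped_format is bundled with native_client builds;
# paths here are placeholders for the actual model files.
convert_graphdef_memmapped_format \
  --in_graph=output_graph.pb \
  --out_graph=output_graph.pbmm
```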