A demo that combines Whisper for speech recognition and Google Text-to-Speech for voice synthesis to interact with Alpaca-LoRA.
Warning
This project is significantly outdated and may no longer work as expected.
- Speech recognition using Whisper, with a selectable model size
- LLaMA 7B language model, configurable from the interface
- Voice synthesis using Google Text-to-Speech
- Graphical interface using gradio
- Conversation history available
- Conversation reset function
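The conversation history and reset features listed above can be sketched as a small history object that replays prior turns into the next prompt. This is a minimal illustration, not the project's actual implementation; the `Conversation` class name and the `User:`/`Assistant:` prompt format are assumptions.

```python
# Minimal sketch of conversation history with a reset function.
# The class name and prompt layout are illustrative, not the demo's real code.
class Conversation:
    def __init__(self) -> None:
        self.turns: list[tuple[str, str]] = []

    def add(self, user: str, assistant: str) -> None:
        """Record one completed question/answer exchange."""
        self.turns.append((user, assistant))

    def as_prompt(self, new_message: str) -> str:
        """Replay prior turns so the model sees the full context."""
        lines = []
        for user, assistant in self.turns:
            lines.append(f"User: {user}")
            lines.append(f"Assistant: {assistant}")
        lines.append(f"User: {new_message}")
        return "\n".join(lines)

    def reset(self) -> None:
        """The 'conversation reset' button would simply clear the history."""
        self.turns.clear()
```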
- Improve the language model
- Use a more advanced AI model for voice synthesis
- Optimize the code and ensure compatibility across platforms (Windows, Linux, etc.)
- Add image generation and recognition using Stable Diffusion
To use the demo, you need access to a microphone. When you run the code, a graphical interface opens in which you can speak into the microphone and receive a response from the Alpaca-LoRA AI.
In the graphical interface, you can select the size of the Whisper model to use (tiny, base, small, medium, large). The model size affects the AI's response time and the quality of the transcription, and therefore of the generated response. You can also manually change the temperature of the Alpaca-LoRA model and reset the conversation.
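The size trade-off can be made concrete with the parameter counts OpenAI publishes for the Whisper checkpoints (tiny ≈ 39M up to large ≈ 1550M parameters). The helper below is only a sketch of that trade-off; `pick_whisper_size` is a hypothetical function, not part of this project or the Whisper API.

```python
# Approximate parameter counts of the published Whisper checkpoints
# (from OpenAI's Whisper model card). Larger models transcribe more
# accurately but take longer to respond.
WHISPER_SIZES = {
    "tiny": 39_000_000,
    "base": 74_000_000,
    "small": 244_000_000,
    "medium": 769_000_000,
    "large": 1_550_000_000,
}

def pick_whisper_size(max_params: int) -> str:
    """Return the largest Whisper size that fits a parameter budget."""
    fitting = [name for name, n in WHISPER_SIZES.items() if n <= max_params]
    if not fitting:
        raise ValueError("no Whisper model fits the given budget")
    return max(fitting, key=WHISPER_SIZES.get)

print(pick_whisper_size(300_000_000))  # -> small
```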
Alpaca-LoRA is used as the language model, loaded with the Hugging Face Transformers library. Speech recognition is handled by OpenAI's Whisper, and voice synthesis by Google Text-to-Speech.
The graphical interface is built using the Gradio library.
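The overall loop these libraries implement — recognize speech, generate a reply, speak it back — can be sketched with the three components passed in as plain callables. The stubs below stand in for Whisper, Alpaca-LoRA, and gTTS so the wiring is visible without loading the real models; `voice_round_trip` and the stub behaviors are illustrative assumptions, not the project's actual code.

```python
from typing import Callable

def voice_round_trip(
    audio_path: str,
    transcribe: Callable[[str], str],    # e.g. Whisper: audio file -> text
    generate: Callable[[str], str],      # e.g. Alpaca-LoRA: prompt -> reply
    synthesize: Callable[[str], bytes],  # e.g. gTTS: text -> audio bytes
) -> tuple[str, str, bytes]:
    """One turn of the voice chat: recognize, answer, speak."""
    question = transcribe(audio_path)
    answer = generate(question)
    speech = synthesize(answer)
    return question, answer, speech

# Exercise the wiring with stand-in components instead of the real models.
question, answer, speech = voice_round_trip(
    "input.wav",
    transcribe=lambda path: "hello",
    generate=lambda prompt: f"You said: {prompt}",
    synthesize=lambda text: text.encode("utf-8"),
)
print(answer)  # -> You said: hello
```

In the real demo, the Gradio interface would feed the recorded audio into a function shaped like this and play back the synthesized reply.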
If you like this project or it has helped you in some way, consider buying me a coffee as a form of support. That way, I can dedicate more time to open source projects like this and improve them even further :)
This project is licensed under the terms of the MIT license. You can view the full license here.