About: LM_Chat_TTS_FrontEnd is a simple yet powerful interface for interacting with LM Studio models using text-to-speech functionality. This project is designed to be lightweight and user-friendly, making it suitable for a wide range of users interested in exploring voice interactions with AI models.
- Download the standalone HTML file LM_Chat_TTS_FrontEnd.html (built with HTML, JavaScript, and CSS)
- Open LM Studio and load your language model of choice
- Test the model within LM Studio to make sure it works natively, and dial in the settings you want
- Go to the Local Server tab
- Choose the LM Studio preset that matches your model
- Turn on Cross-Origin Resource Sharing (CORS) so the page can reach the server (see the request sketch after this list)
- Start the server
- Open LM_Chat_TTS_FrontEnd.html in the Edge browser (for the best voice options)
- Type in the system prompt and AI name, choose the AI voice, type your name, and choose a voice for yourself (optional)
- Type your message and hit Send
- Select Erase Memory to reset the conversation memory if the AI is getting confused
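LM Studio's local server exposes an OpenAI-compatible chat endpoint, by default at http://localhost:1234. The snippet below is a minimal sketch of how a page like this one can call it with `fetch`; the function name, port, and temperature value are illustrative, not necessarily what the file uses.

```javascript
// Minimal sketch of a request to the LM Studio local server.
// Assumes the default port 1234 and that CORS is enabled in LM Studio.
async function sendMessage(systemPrompt, history, userText) {
  const response = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [
        { role: "system", content: systemPrompt }, // the system prompt from the menu
        ...history,                                // earlier turns act as the AI's "memory"
        { role: "user", content: userText }        // the message just typed
      ],
      temperature: 0.7                             // illustrative value
    })
  });
  const data = await response.json();
  return data.choices[0].message.content;          // the reply text to display and speak
}
```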
- Standalone HTML/JS/CSS File: The entire functionality is encapsulated in a single file for ease of use.
- Voice Chat Capability: Engage in voice chats with LM Studio models.
- Customizable Options: Users can set various parameters, including the AI's name and voice, the user's name and voice, and the voice speed.
- Local Server Integration: Easily connect to a local server running LM Studio.
- Browser Compatibility: Optimized for the Edge browser, which offers a wider range of voice options (see the voice-loading sketch after this list).
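The wider voice selection in Edge comes from the browser's built-in Web Speech API: whatever `speechSynthesis.getVoices()` returns is what the voice dropdowns can offer, and Edge typically ships extra natural-sounding voices. A rough sketch of populating a voice menu (the element id is hypothetical):

```javascript
// Sketch: populate a <select> with the voices the browser exposes.
// Voices load asynchronously in most browsers, hence the onvoiceschanged hook.
function loadVoices(selectElement) {
  const voices = window.speechSynthesis.getVoices();
  selectElement.innerHTML = "";
  voices.forEach((voice, index) => {
    const option = document.createElement("option");
    option.value = index;
    option.textContent = `${voice.name} (${voice.lang})`;
    selectElement.appendChild(option);
  });
}

window.speechSynthesis.onvoiceschanged = () =>
  loadVoices(document.getElementById("aiVoiceSelect")); // "aiVoiceSelect" is a hypothetical id
```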
- Clone the Repository: Download the files from the GitHub repository.
- Start LM Studio Local Server: Ensure LM Studio is running locally on your machine.
- Open the HTML File: Use a web browser, preferably Edge, to open the HTML file.
- Configure Settings: Upon opening the file, configure the settings in the menu, including AI and user voice options.
- Enter System Prompts: Customize system prompts as needed for specific interactions.
- Start Chatting: Type messages in the chat box and receive spoken responses from the AI model (a sketch of this send-and-speak flow follows this list).
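Putting the two halves together, one chat turn might look like the sketch below. It reuses the `sendMessage()` helper sketched earlier; the `settings` fields are hypothetical names standing in for whatever the menu actually stores.

```javascript
// Sketch of one chat turn: get the reply, remember it, and read it aloud.
async function chatAndSpeak(userText, settings, conversation) {
  const reply = await sendMessage(settings.systemPrompt, conversation, userText);
  conversation.push({ role: "user", content: userText });
  conversation.push({ role: "assistant", content: reply });

  const utterance = new SpeechSynthesisUtterance(reply);
  utterance.voice = window.speechSynthesis.getVoices()[settings.aiVoiceIndex]; // voice chosen in the menu
  utterance.rate = settings.voiceSpeed;                                        // 1.0 is normal speed
  window.speechSynthesis.speak(utterance);
  return reply;
}
```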
- AI and User's Name: Personalize the names for a more customized experience.
- Voice Selection: Choose from a range of voices for both AI and user.
- Voice Speed Control: Adjust the speed of voice responses.
- Custom Endpoint: Set up a custom endpoint URL for specific server interactions.
- Theme Toggle: Switch between light and dark themes for user comfort.
- Conversation Log: View, evaluate, condense, or clear the conversation log for better data management.
- Auto-Read Functionality: Enable or disable automatic reading of messages (see the settings sketch after this list).
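One plausible way to group and persist these options between sessions is a single settings object kept in `localStorage`; the field names and storage key below are illustrative rather than the exact ones used in the file.

```javascript
// Illustrative settings object covering the options listed above.
const defaultSettings = {
  aiName: "Assistant",
  userName: "User",
  aiVoiceIndex: 0,
  userVoiceIndex: 0,
  voiceSpeed: 1.0,                                        // maps to SpeechSynthesisUtterance.rate
  endpoint: "http://localhost:1234/v1/chat/completions",  // custom endpoint URL
  darkTheme: false,
  autoRead: true
};

function saveSettings(settings) {
  localStorage.setItem("ttsFrontEndSettings", JSON.stringify(settings));
}

function loadSettings() {
  const stored = localStorage.getItem("ttsFrontEndSettings");
  return stored ? { ...defaultSettings, ...JSON.parse(stored) } : { ...defaultSettings };
}
```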
- Check Browser Compatibility: Ensure you are using the Edge browser for full functionality.
- Verify Local Server: Make sure the LM Studio local server is running correctly.
- Review Console Logs: Use the browser's developer tools to check for any error messages (a quick server check you can paste into the console follows this list).
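To confirm the server side quickly, you can paste a one-off request into the browser console (F12); `/v1/models` is the standard OpenAI-compatible listing endpoint that LM Studio serves. Adjust the URL if you changed the port.

```javascript
// Paste into the browser console to verify the local server is reachable
// and that CORS is enabled. A failed request usually logs a CORS or network error.
fetch("http://localhost:1234/v1/models")
  .then((res) => res.json())
  .then((data) => console.log("LM Studio server reachable:", data))
  .catch((err) => console.error("Server check failed (is the server running with CORS on?):", err));
```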
Contributions to LM_Chat_TTS_FrontEnd are welcome. Please follow the standard fork-and-pull request workflow. Do not forget to update tests and documentation as necessary.
This project is released under [specify license], which allows for wide usage and contributions while maintaining the necessary legal protections.
Created by: Friend of AI https://www.youtube.com/@friendofai