ToyGeniusLab invites kids into a world where they can create and personalize AI-powered toys. By blending technology with imaginative play, we not only empower young minds to explore their creativity but also help them become comfortable with harnessing AI, fostering tech skills in a fun and interactive way.
- 🎨 Customizable AI Toys: Kids design their toy's personality and interactions.
- 📚 Educational: A hands-on introduction to AI, programming, and technology.
- 💡 Open-Source: A call to the community for ongoing enhancement of software and 3D-printed parts.
- 🤖 Future Enhancements: Plans to add servos, displays, and more for a truly lifelike toy experience.
- Python 3.x
- OpenAI API key
- Eleven Labs API key
```bash
git clone https://github.com/sidu/toygeniuslab.git
```
Navigate to the project directory and run:

```bash
pip install -r requirements.txt
```
Install ffmpeg and mpv (the commands below use Homebrew on macOS; use your platform's package manager on other systems):

```bash
brew install ffmpeg
brew install mpv
```
Before running the project, you'll need to set up two essential environment variables: `OPENAI_API_KEY` and `ELEVEN_API_KEY`.
- Visit the OpenAI API Dashboard to obtain your OpenAI API key.
- Once you have your key, set it as an environment variable. On Unix-based systems, you can use the following command:

  ```bash
  export OPENAI_API_KEY="your-api-key-here"
  ```

  On Windows, you can set it through the command prompt:

  ```
  set OPENAI_API_KEY=your-api-key-here
  ```
- To get the Eleven API key, follow the guide available at Eleven Labs Documentation.
- Similar to the OpenAI API key, set the Eleven API key as an environment variable:
  ```bash
  # On Unix-based systems
  export ELEVEN_API_KEY="your-eleven-api-key-here"

  # On Windows
  set ELEVEN_API_KEY=your-eleven-api-key-here
  ```
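If you want to confirm both variables are visible to Python before launching, a quick sanity check like the one below works. This snippet is not part of the repo; it only reads the environment.

```python
# Optional sanity check (not part of the repo): confirm both API keys
# are set in the environment before starting the toy.
import os

for key in ("OPENAI_API_KEY", "ELEVEN_API_KEY"):
    if not os.environ.get(key):
        raise SystemExit(f"{key} is not set")
print("Both API keys are set.")
```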
Now you're ready to run the project with both API keys set up.
```bash
python pet.py petergriffin.yaml
```
Before running the project, make sure you have a portable Bluetooth microphone and speaker connected to your computer, and that they are selected as the default input and output devices. For the best experience, we recommend a mini Bluetooth speaker/mic combo, like the LEICEX Mini Speaker from Amazon (~$10).
- Connect your Bluetooth microphone and speaker to your computer following the manufacturer's instructions.
- On Windows:
  - Right-click on the Speaker icon in the taskbar and select "Open Sound settings."
  - Under the "Input" section, select your Bluetooth microphone from the dropdown.
  - Under the "Output" section, select your Bluetooth speaker from the dropdown.
- On macOS:
  - Open "System Preferences" and click on "Sound."
  - Go to the "Input" tab and select your Bluetooth microphone.
  - Go to the "Output" tab and select your Bluetooth speaker.
With these settings in place, you'll get the best audio experience while interacting with the project.
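If you'd like to double-check the defaults from Python, the `sounddevice` package (an optional extra, not necessarily a project dependency) can list your devices:

```python
# Optional check: list audio devices and show the current defaults.
# Requires `pip install sounddevice`; not necessarily a project dependency.
import sounddevice as sd

print(sd.query_devices())   # every available input/output device
print(sd.default.device)    # indices of the (default input, default output)
```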
- Download and print the Mario template.
- After pairing a Bluetooth speaker/microphone with your computer, insert it into the paper toy.
- Execute the AI toy program by running `python pet.py mario.yaml` in your terminal. Get ready for interactive fun!
- Begin by downloading the blank template. You can color it digitally or use markers and crayons for a hands-on approach. You can also grab a slightly edited version from our repo here (it has a blank face for more creative options).
- Insert a Bluetooth speaker/microphone into your custom-designed toy, ensuring it's paired with your computer first.
- Make a copy of an existing toy's config by running `cp mario.yaml mytoy.yaml`.
- Update the `system_prompt` property in `mytoy.yaml` according to the personality you want your toy to have (see the sketch after this list for how these fields are read).
- Optionally, update the `voice_id` property in `mytoy.yaml` with the value of the voice you'd like your toy to have from ElevenLabs.io.
- Activate your AI toy by executing `python pet.py mytoy.yaml` in your terminal. Enjoy your creation's company!
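To make `system_prompt` and `voice_id` concrete, here is a minimal loading sketch. It assumes PyYAML and is not the repo's actual code; only the two property names come from the steps above.

```python
# Hypothetical sketch (not pet.py's actual code): load a toy config and
# inspect the two properties described above.
import yaml  # pip install pyyaml

with open("mytoy.yaml") as f:
    config = yaml.safe_load(f)

# The personality prompt sent to the language model.
print(config["system_prompt"])

# Optional ElevenLabs voice; fall back to a default if it isn't set.
print(config.get("voice_id", "<default voice>"))
```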
Caught a fun moment with your AI toy? We'd love to see it! Share your experiences and creative toy designs on social media using the hashtag #ToyGeniusLab. Let's spread the joy and inspiration far and wide!
Love ToyGeniusLab? Give us a ⭐ on GitHub to stay connected and receive updates on new features, enhancements, and community contributions. Your support helps us grow and inspire more creative minds!
We're dreaming big for ToyGeniusLab's next steps and welcome your brilliance to bring these ideas to life. Here's what's on our horizon:
- More pets
- Solid local E2E execution: local LLM, local transcription, local TTS
- Local fast transcription and TTS
- SD-based generation of custom pets
- Latency improvements
- Interruption handling
- Vision reasoning, with local VLLM support
- Servos for movement
- 3D printable characters
- “Pet in a box” (Raspberry Pi)
Help shape ToyGeniusLab's tomorrow: Raise PRs for innovative features or spark conversations in our Discussions. 🌟
Overview of how the toy works.
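At a high level, the toy listens on the microphone, transcribes what it hears, generates a reply in the persona set by `system_prompt`, and speaks the reply through the speaker. The sketch below is illustrative only and is not the repo's pet.py: the model names are assumptions, it pulls in sounddevice and scipy (which may not be in requirements.txt), and it uses OpenAI TTS purely as a stand-in for the ElevenLabs voices the project actually uses.

```python
# Illustrative loop only, not the repo's pet.py. Model names are assumptions,
# and OpenAI TTS stands in for the ElevenLabs voices the project uses.
import subprocess
import sounddevice as sd
from scipy.io import wavfile
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment
RATE = 16000
SYSTEM_PROMPT = "You are a cheerful toy who loves adventure."  # comes from the YAML in practice

while True:
    # 1. Listen: record a few seconds from the default (Bluetooth) microphone.
    audio = sd.rec(int(5 * RATE), samplerate=RATE, channels=1, dtype="int16")
    sd.wait()
    wavfile.write("heard.wav", RATE, audio)

    # 2. Transcribe the child's speech.
    with open("heard.wav", "rb") as f:
        heard = client.audio.transcriptions.create(model="whisper-1", file=f).text

    # 3. Generate a reply in the toy's persona.
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": heard}],
    ).choices[0].message.content

    # 4. Speak the reply and play it through the default (Bluetooth) speaker.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply)
    speech.write_to_file("reply.mp3")
    subprocess.run(["mpv", "--really-quiet", "reply.mp3"])
```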
MIT