Concept: using WebRTC and the Web Audio API, take input from a user's camera and generate synthesized audio based on the current camera frame's properties (brightness, color profile, etc.).
Right now it's very rudimentary, but everything starts somewhere.
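The core idea is simple: sample the current video frame (e.g. via `getImageData` on a canvas the camera stream is drawn to), reduce it to a property like average brightness, and map that onto a synth parameter. A minimal sketch of that mapping, with hypothetical function names (the actual project code may differ):

```javascript
// Average the luma of an RGBA pixel buffer (the layout returned by
// CanvasRenderingContext2D.getImageData) into a 0..1 brightness value.
function averageBrightness(rgba) {
  let sum = 0;
  for (let i = 0; i < rgba.length; i += 4) {
    // Rec. 601 luma weights for R, G, B; skip the alpha byte.
    sum += 0.299 * rgba[i] + 0.587 * rgba[i + 1] + 0.114 * rgba[i + 2];
  }
  return sum / (rgba.length / 4) / 255;
}

// Map brightness linearly onto an audible frequency range in Hz.
function brightnessToFrequency(b, lo = 110, hi = 880) {
  return lo + b * (hi - lo);
}

// In the browser, per animation frame, this would drive a Web Audio
// OscillatorNode, e.g.:
//   osc.frequency.setValueAtTime(brightnessToFrequency(b), ctx.currentTime);
```

A brighter frame raises the pitch, a darker one lowers it; color-profile mappings (e.g. hue to waveform or filter cutoff) follow the same frame-sample-and-map pattern.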
- Run `npm install` to install packages and build the minified source.
- Run `npm start`.
- Browse to http://localhost:8000.
- Turn down your speakers/headphones JUST IN CASE.
- Allow access to your camera.
- Listen, fiddle with your camera, enjoy.