Within this project I aim to explore the symbiotic relationship between gesture and sound, gesture specifically emanating from the face. By exploring how to pull emotional content into the sonic domain through the application of data structures, regression, and DSP practice, this project serves as a means of building the toolset a composer needs to represent an emotion with an interface as hands-off as frowning and immediately being drawn into aural dread, or smiling and being embraced by granular euphoria.
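To make that idea concrete, below is a minimal, hypothetical sketch of the kind of gesture-to-parameter mapping described above, written as a Max js object. This is not the script shipped in main.maxpat; the inlet assignments, feature choice (e.g. mouth height and eyebrow raise from FaceOSC), and weights are illustrative assumptions standing in for whatever regression the patch actually performs (for example with FluCoMa).

```javascript
// emotion_map.js -- illustrative sketch only, not the project's actual script.
// Blends two pre-normalised (0..1) facial-gesture values into a single
// "valence" control: frown -> 0 (dread), smile -> 1 (euphoria).
inlets = 2;   // inlet 0: mouth height, inlet 1: eyebrow raise (assumed features)
outlets = 1;  // outlet 0: valence in 0..1

var mouth = 0.0;
var brow = 0.0;

// Placeholder weights chosen for illustration; in practice these would be
// learned by a regressor rather than hand-set.
var wMouth = 0.6;
var wBrow = 0.4;

function msg_float(v) {
    // 'inlet' is the Max js global holding the inlet that received the message
    if (inlet === 0) {
        mouth = v;
    } else {
        brow = v;
    }
    // weighted sum, clamped to 0..1, sent out to drive the synthesis layer
    var valence = Math.max(0, Math.min(1, wMouth * mouth + wBrow * brow));
    outlet(0, valence);
}
```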
- Line 308 of the js script inside the jsui object found in main.maxpat may need to be commented out; more on this inside the patch.
- FluCoMa Max externals (not in the package manager): https://github.com/flucoma/flucoma-max
- CNMAT externals (available in the package manager)
- FaceOSC
Kyle McDonald, FaceOSC, 2020, ofxFaceTracker, v1.2, https://github.com/kylemcdonald/ofxFaceTracker/releases
Owen Green, Gerard Roma, Pierre Alexandre Tremblay, James Bradbury, Francesco Cameli, Alex Harker, and Ted Moore, flucoma-max, 2021, FluCoMa, v1.0.0-TB2-beta4, https://github.com/flucoma/flucoma-max
Michael Zbyszynski and Matt Wright, OSC-route, 2000-08, University of California, CNMAT Externals, v1.04b-25-g23e810fa, https://cnmat.berkeley.edu/downloads