Pinned Repositories
9DoF-Features-for-Max
Max abstractions for extracting Jerkiness and Quantity of Motion (QoM) features from 9DoF sensor data (3D accelerometer, 3D gyroscope, 3D magnetometer).
freesound-python
Python client for the Freesound API
Gestural-Sound-Toolkit
Gestural Sound Toolkit in Max/MSP for easy and fast Gesture-to-Sound scenario prototyping
GIMLeT
GIMLeT – Gestural Interaction Machine Learning Toolkit
KineToolbox
A collection of Max tools for gesture/sound interaction.
max_mc_swarm_polysynth
A polyphonic synth with 32 sawtooth oscillators per voice, built with Max 8 mc objects
mubu-recorder
Max patches to record multimodal data to mubu containers
MyoMaxML
Tools for using Myo and machine learning in Max
n4m-posenet
PoseNet integration for Node for Max.
RealtimeAudioClassification
Using spectrograms and convolutional neural networks to listen to environmental sounds.
federicoVisi's Repositories
federicoVisi/GIMLeT
GIMLeT – Gestural Interaction Machine Learning Toolkit
federicoVisi/mubu-recorder
Max patches to record multimodal data to mubu containers
federicoVisi/Gestural-Sound-Toolkit
Gestural Sound Toolkit in Max/MSP for easy and fast Gesture-to-Sound scenario prototyping
federicoVisi/n4m-posenet
PoseNet integration for Node for Max.
federicoVisi/SoundS-gesture-sound-interaction
federicoVisi/app_NatNetThree2OSC
federicoVisi/audio-diffusion
Apply diffusion models, via the Hugging Face diffusers package, to synthesize music instead of images.
federicoVisi/audiocraft
Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable music generation LM with textual and melodic conditioning.
federicoVisi/AudioLDM
AudioLDM: Generate speech, sound effects, music and beyond, with text.
federicoVisi/audiolm-pytorch
Implementation of AudioLM, a SOTA language modeling approach to audio generation from Google Research, in PyTorch
federicoVisi/autocoder
federicoVisi/autocoder_external
federicoVisi/C74-Max-Examples
Examples for using the sensel object in Cycling '74 Max
federicoVisi/DALLE2-pytorch
Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in PyTorch
federicoVisi/federicoVisi
federicoVisi/flow_synthesizer
Universal audio synthesizer control learning with normalizing flows
federicoVisi/jweb-hands-landmarker
A self-contained example demonstrating how to use MediaPipe HandLandmarker with Max's jweb, connected to either a live webcam stream or still images.
federicoVisi/maxdevtools
federicoVisi/maxmsp_ai
Developing deep learning models in Max/MSP
federicoVisi/motion-tracking
Repository for motion tracking research at the HfMT
federicoVisi/musiclm-pytorch
Implementation of MusicLM, Google's new SOTA model for music generation using attention networks, in PyTorch
federicoVisi/n4m-handpose
Wraps MediaPipe HandPose inside Electron and serves the detected parts via MaxAPI.
federicoVisi/o.jm.korgnanokontrol
Odot wrapper for the Korg nanoKONTROL
federicoVisi/o.jm.korgnanopad
federicoVisi/open-musiclm
Implementation of MusicLM, a new text-to-music model published by Google, with a few modifications.
federicoVisi/rapid
Max implementation of RapidLib
federicoVisi/sample-generator
Tools to train a generative model on arbitrary audio samples
federicoVisi/StyleCLIP-Tutorial
federicoVisi/transfer-m4l
Max for Live device for real-time timbre transfer using RAVE models.
federicoVisi/tuning_playbook
A playbook for systematically maximizing the performance of deep learning models.