librosa
There are 353 repositories under the librosa topic.
librosa/librosa
Python library for audio and music analysis
Super-Badmen-Viper/NSMusicS
NSMusicS: a multi-platform, multi-mode music application built with Electron (Vue3 + Vite + TypeScript), .NET Core, and AI.
x4nth055/emotion-recognition-using-speech
Building and training a speech emotion recognizer that predicts human emotions using Python, scikit-learn, and Keras.
marcogdepinto/emotion-classification-from-audio-files
Understanding emotions from audio files using neural networks and multiple datasets.
Demfier/multimodal-speech-emotion-recognition
Lightweight and Interpretable ML Model for Speech Emotion Recognition and Ambiguity Resolution (trained on IEMOCAP dataset)
scherroman/mugen
A command-line music video generator based on rhythm
danyalimran93/Music-Emotion-Recognition
A machine learning approach to music emotion modeling.
KAIST-MACLab/PyTSMod
An open-source Python library for audio time-scale modification.
ewan-xu/LibrosaCpp
LibrosaCpp is a C++ implementation of librosa for computing short-time Fourier transform (STFT) coefficients, mel spectrograms, and MFCCs.
spotify/realbook
Easier audio-based machine learning with TensorFlow.
GianlucaPaolocci/Sound-classification-on-Raspberry-Pi-with-Tensorflow
This project presents a simple method for training an MLP neural network on audio signals. The trained model can be exported to a Raspberry Pi (2 or later suggested) to classify audio recorded with a USB microphone.
qlemaire22/speech-music-detection
Python framework for Speech and Music Detection using Keras.
tiagoft/audio_to_midi
(Monophonic) audio-to-MIDI converter using Python and librosa.
MeidanGR/SpeechEmotionRecognition_Realtime
Real-time Speech Emotion Recognition (SER) using Long Short-Term Memory (LSTM) deep neural networks.
yeyupiaoling/AudioClassification-PaddlePaddle
Audio classification implemented with PaddlePaddle, supporting models such as EcapaTdnn, PANNS, TDNN, Res2Net, and ResNetSE, plus multiple preprocessing methods.
ankurbhatia24/MULTIMODAL-EMOTION-RECOGNITION
Human Emotion Understanding using multimodal dataset.
dodiku/music-synthesis-with-python
Music Synthesis with Python talk, originally given at PyGotham 2017.
kristijanbartol/Deep-Music-Tagger
Music genre classification model using CRNN
Guan-JW/Melody-Note
A simple web page that records hummed vocals and converts them into piano notes and sheet music. Inspiration is fleeting: the goal of this project is to capture a short tune as audio input, recognize its melody, transcribe it into a score, and play it back as piano, so that everyday musical ideas can be saved for later recollection or even further composition.
ravising-h/Speech2Face
Image processing, speech processing, encoder-decoder architecture; a research paper implementation.
cetinsamet/music-genre-classification
Music genre classification from audio spectrograms using deep learning
mariostrbac/environmental-sound-classification
Environmental sound classification with Convolutional neural networks and the UrbanSound8K dataset.
AmritK10/Urban-Sound-Classification
Sound Classification using Neural Networks
Ztrimus/speech-emotion-recognition
Predicting various emotions in human speech signals by detecting the speech components affected by emotion.
aldente0630/sound-anomaly-detection-with-autoencoders
MIMII Sound Anomaly Detection with AutoEncoders
danyalimran93/Music-Genre-Classification
Classifying English music (.mp3) files using Music Information Retrieval (MIR), Digital Signal Processing (DSP), and Machine Learning (ML) strategies.
adzialocha/tomomibot
Artificial intelligence bot for live voice improvisation
hernanrazo/human-voice-detection
Binary classification problem that aims to classify human voices from audio recordings. Implemented using PyTorch and Librosa.
LaoADe/music_point
A simple music beat-sync effect implemented in 100 lines of code.
albincorreya/ChromaCoverId
Methods to compute various chroma audio features and audio similarity measures particularly for the task of cover song identification
clolsonus/VirtualChoir
Automatically sync, mix, and draw virtual choir videos from raw tracks of individual recordings. You may need some singing skills but you don't need video editing skills or additional software.
xiaominfc/melspectrogram_cpp
A C/C++ implementation of the melspectrogram computation from the Python audio library librosa.
abishek-as/Audio-Classification-Deep-Learning
This repository explores audio classification using deep learning models: Artificial Neural Networks (ANN), 1D Convolutional Neural Networks (CNN1D), and 2D CNNs (CNN2D). Basic data preprocessing and feature extraction are applied to the audio sources before the models are built, and the accuracy, training time, and prediction time of each model are compared. The models are then deployed so that users can load a sound and obtain each deployed model's prediction.
anujdutt9/Audio-Scene-Classification
Scene classification using audio from the surrounding environment.
rudrajikadra/Speech-Emotion-Recognition-using-Librosa-library-and-MLPClassifier
In this project we use the RAVDESS dataset to classify speech emotion with a Multi-Layer Perceptron classifier.
victor369basu/Audio-Track-Separation
An audio track separator built in TensorFlow that separates vocals and drums from an input song track.