Pinned Repositories
STM32_MPU9150
academic-drawing
A project providing source code (Matlab and Python) for presenting experimental results.
activityrecognition
Information about activity recognition
ad5933
AD5933 test program
awesome_time_series_in_python
A curated list of Python packages for time series analysis
bridge_host
bridge_slave
bridgehost-v2018
2018 version, with the tpc task removed after xiaojun's modifications
BridgeSlave-mcu
MCU-side (slave device) project for bridge electrical impedance measurement
Qt-terminalRes
Qt host-computer application for electrical impedance measurement
edaworld's Repositories
edaworld/academic-drawing
A project providing source code (Matlab and Python) for presenting experimental results.
edaworld/activityrecognition
Information about activity recognition
edaworld/bridgehost-v2018
2018 version, with the tpc task removed after xiaojun's modifications
edaworld/bridgeslave-v2018
2018 version, with the tpc task removed after xiaojun's modifications
edaworld/cheatsheets
Official Matplotlib cheat sheets
edaworld/Coursera-ML-AndrewNg-Notes
edaworld/ctw
Implementation of Canonical Time Warping
edaworld/Data-Science--Cheat-Sheet
Cheat Sheets
edaworld/ESKF
Raw data and Matlab implementation of BonaDrone's ESKF
edaworld/ESKF-Attitude-Estimation
Error-state Kalman filter (ESKF) for attitude estimation
edaworld/eskf-gps-imu-fusion
Fuses GPS and IMU measurements with an error-state Kalman filter for higher-accuracy localization
edaworld/feature-selection
edaworld/gtw
Implementation of Generalized Time Warping
edaworld/IMU_Attitude_Estimator
Estimate AHRS attitude with EKF, ESKF and Mahony filter.
edaworld/imu_gnss_fusion
IMU+GNSS Fusion Localization with ESKF
edaworld/IMUCalibration-Gesture
IMU calibration with gesture/orientation visualization
edaworld/Kalman
Some Python implementations of the Kalman filter
edaworld/lstm_anomaly_thesis
Anomaly detection for temporal data using LSTMs
edaworld/Machine-Learning-Classification-Regression-Twitter-Buzz
Buzz Prediction on Twitter.

Project description: two datasets are provided, one for a regression task and one for a classification task; the right-most column of each is the dependent variable (buzz), and a data description file accompanies each dataset. Deciding which dataset belongs to which task is part of the project. Read the data into a Jupyter notebook with pandas, then preprocess it: explore the data, check for missing values, and apply feature scaling, justifying the type of scaling used.

Regression task: apply all regression models covered so far, including at least KNN regression, linear regression, Ridge, Lasso, polynomial regression, and SVM (both plain and kernelized). For models with tuning parameters, use grid search to find the best values; use plots to inspect the results and cross-validation to report average training and testing scores. Finally, pick the best regressor, retrain it on the entire dataset with the best parameters, and predict buzz for the test set.

Classification task: choose and justify an evaluation strategy, then find the best parameters for KNN classification, logistic regression, linear SVM, kernelized SVM, and decision tree classifiers, and report which model gives the best results.

Follow-up project: use the same datasets as Project 2, but run all models on a 10% sample drawn with the sampling code from Project 2; explore and scale the data as before. Regression task: apply two models with bagging, two with pasting, two with AdaBoost, and one with gradient boosting; apply PCA, rerun the Project 2 models on the transformed data, and compare the resulting table with the copied Project 2 table (no need to rerun the originals) to judge whether PCA improves the results; also apply the deep learning models covered in class. Classification task: apply four voting classifiers (two with hard voting, two with soft voting), two models with bagging, two with pasting, two with AdaBoost, and one with gradient boosting; repeat the PCA comparison and the deep learning models as in the regression task.
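As an illustration of the grid-search-plus-cross-validation step described above, here is a minimal scikit-learn sketch, assuming the buzz regression data has been loaded into a pandas DataFrame whose last column is the target; the file name and column layout are assumptions for illustration, not taken from the repository.

```python
import pandas as pd
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge

# Assumed layout: features in all columns except the last, buzz in the last column.
df = pd.read_csv("buzz_regression.csv")  # hypothetical file name
X, y = df.iloc[:, :-1], df.iloc[:, -1]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale the features and tune the Ridge regularization strength by grid search.
pipe = make_pipeline(StandardScaler(), Ridge())
grid = GridSearchCV(pipe, {"ridge__alpha": [0.01, 0.1, 1.0, 10.0]}, cv=5)
grid.fit(X_train, y_train)

# Report the cross-validated score of the best parameters and the held-out test score.
print("best params:", grid.best_params_)
print("mean CV score:", grid.best_score_)
print("test score:", grid.score(X_test, y_test))
```

The same pattern (pipeline, parameter grid, cross-validation) carries over to the other regressors and classifiers listed in the project description.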
edaworld/Making-elegant-Matlab-figures
A repository comprising multiple functions for making elegant publication-quality figures in MATLAB
edaworld/matlab-plot-tools
Various small packages for Matlab
edaworld/ms2deepscore
Deep learning similarity measure for comparing MS/MS spectra with respect to their chemical similarity
edaworld/techxuexi-js
Xuexi Qiangguo (学习强国) JavaScript code for Tampermonkey and similar userscript extensions; earns 45 points per day
edaworld/traj-dist
A Python package for computing distances between 2D trajectories.
edaworld/Transfer-Learning-using-Matlab
Transfer learning for CNN-based image classification networks
edaworld/transferlearning
Everything about transfer learning and domain adaptation (迁移学习)
edaworld/tsfresh
Automatic extraction of relevant features from time series
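A minimal sketch of the kind of feature extraction tsfresh performs, assuming a long-format DataFrame with hypothetical id/time/value columns (the column names and data are illustrative, not from the repository):

```python
import pandas as pd
from tsfresh import extract_features

# Hypothetical long-format data: one row per (series id, timestamp) pair.
df = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 2],
    "time":  [0, 1, 2, 0, 1, 2],
    "value": [0.5, 0.7, 0.6, 1.2, 1.1, 1.3],
})

# Extract a wide table of statistical features, one row per series id.
features = extract_features(df, column_id="id", column_sort="time")
print(features.shape)
```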
edaworld/ttk4250-sensor-fusion
Sensor fusion using Bayesian probabilistic methods such as IMM-PDAF, ESKF, and EKF-SLAM; assignments from the TTK4250 sensor fusion course.
edaworld/twitter-sentiment-analysis
Sentiment analysis on tweets using Naive Bayes, SVM, CNN, LSTM, etc.
edaworld/zotero-better-notes
Everything about note management. All in Zotero.