feature-learning
There are 38 repositories under the feature-learning topic.
KaiyangZhou/pytorch-center-loss
PyTorch implementation of Center Loss
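Center loss supplements the usual softmax objective by penalizing the distance between each deep feature and a running centre of its class. As a rough illustration of the idea (a minimal NumPy sketch, not this repository's code; `alpha` is the centre-update rate from the original paper):

```python
import numpy as np

def center_loss(features, labels, centers):
    """Center loss: half the mean squared distance between each
    feature vector and the centre of its ground-truth class."""
    diffs = features - centers[labels]            # (N, D)
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

def update_centers(features, labels, centers, alpha=0.5):
    """Delta-rule centre update: each class centre moves towards
    the mean of that class's features by a step of size alpha."""
    new_centers = centers.copy()
    for c in np.unique(labels):
        mask = labels == c
        delta = np.mean(centers[c] - features[mask], axis=0)
        new_centers[c] = centers[c] - alpha * delta
    return new_centers
```

In practice this loss is added to the classification loss with a small weight, and the centres are updated once per mini-batch rather than from the full dataset.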
pathak22/unsupervised-video
[CVPR 2017] Unsupervised deep learning using unlabelled videos on the web
antao97/UnsupervisedPointCloudReconstruction
Experiments on unsupervised point cloud reconstruction.
JuanDuGit/DH3D
DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DOF Relocalization
rajarsheem/libsdae-autoencoder-tensorflow
A simple TensorFlow-based library for deep and/or denoising autoencoders.
zhulf0804/GCNet
Leveraging Inlier Correspondences Proportion for Point Cloud Registration. https://arxiv.org/abs/2201.12094.
getml/getml-community
Fast, high-quality forecasts on relational and multivariate time-series data powered by new feature learning algorithms and automated ML.
mims-harvard/ohmnet
OhmNet: Representation learning in multi-layer graphs
xyj77/MCF-3D-CNN
Temporal-spatial Feature Learning of DCE-MR Images via 3DCNN
bio-ontology-research-group/walking-rdf-and-owl
Feature learning over RDF data and OWL ontologies
yafangshih/Deep-COOC
Deep Co-occurrence Feature Learning for Visual Object Recognition (CVPR 2017)
cswluo/SEF
Code for paper "Learning Semantically Enhanced Feature for Fine-grained Image Classification"
CarsonScott/Competitive-Feature-Learning
Online feature-extraction and classification algorithm that learns representations of input patterns.
LFhase/FeAT
[NeurIPS 2023] Understanding and Improving Feature Learning for Out-of-Distribution Generalization
manhph2211/Self-Supervised-Distillation
Easy-to-read implementation of self-supervised learning using vision transformer and knowledge distillation with no labels - DINO :smiley:
ml-uol/prosper
A Python Library for Probabilistic Sparse Coding with Non-Standard Priors and Superpositions
antao97/PointCloudSegmentation
Experiments on point cloud segmentation.
sjenni/LearningToSpotArtifacts
Self-Supervised Feature Learning by Learning to Spot Artifacts. In CVPR, 2018.
dreizehnutters/pcapAE
ConvGRU-based autoencoder for unsupervised spatio-temporal anomaly detection in computer network (PCAP) traffic.
sjenni/LCI
Steering Self-Supervised Feature Learning Beyond Local Pixel Statistics. In CVPR, 2020.
zoli333/Center-Loss
An implementation of the Center Loss paper (2016).
rikturr/mml-feature-learning
Miami Machine Learning Meetup - Feature Learning with Matrix Factorization and Neural Networks
GabrielFernandezFernandez/SPIVAE
Insights into stochastic processes from VAEs. Code for the paper: Learning minimal representations of stochastic processes with variational autoencoders.
hmohebbi/CNN_featureLearning_SVM_classifier
Image Classification via Transfer Learning: Using Pre-trained Densely Connected Convolutional Network (DenseNet) weights
zacrash/world-models-feature-learning
Experiment with World Models by Ha et al., using Variational Recurrent Neural Networks for more task-relevant feature learning
eigenvivek/Grad-CAMO
[CVPRW 2024] Learning interpretable single-cell morphological profiles from 3D Cell Painting z-stacks
DocsaidLab/DocClassifier
A zero-shot document classifier.
INSPIRE-Lab-US/LSR-dictionary-learning
Associated codebase for the paper "Learning Mixtures of Separable Dictionaries for Tensor Data: Analysis and Algorithms"
alex-kom/Cluster-HyperEnsembles
Ensembles and hyperparameter optimization for clustering pipelines.
EEE17A/DNN-for-NILM
In this project, we apply various DNNs to the problem of non-intrusive load monitoring (NILM) and compare their results across appliances on the REDD dataset. We take a sliding-window approach, in the hope that further tuning and testing will enable real-time disaggregation. Disaggregated energy-consumption results are compared using MSE, MAE, relative error, and F1 score.
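The sliding-window framing and the evaluation metrics described above can be sketched as follows (a minimal NumPy sketch under our own assumptions; the window parameters and `on_threshold` are illustrative, not the project's exact configuration):

```python
import numpy as np

def sliding_windows(signal, width, stride=1):
    """Slice a 1-D aggregate power signal into overlapping windows,
    one training example per window."""
    n = (len(signal) - width) // stride + 1
    return np.stack([signal[i * stride : i * stride + width] for i in range(n)])

def nilm_metrics(y_true, y_pred, on_threshold=10.0):
    """MAE on the disaggregated power, plus an on/off F1 score
    obtained by thresholding both traces at `on_threshold` watts."""
    mae = np.mean(np.abs(y_true - y_pred))
    t, p = y_true > on_threshold, y_pred > on_threshold
    tp = np.sum(t & p)
    fp = np.sum(~t & p)
    fn = np.sum(t & ~p)
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return mae, f1
```

With `stride` equal to the window width the windows are non-overlapping; a stride of 1 gives one prediction per time step, which is what makes real-time disaggregation plausible.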
sedflix/tripletgan.pytorch
Implementation of the paper Training Triplet Networks with GAN
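The triplet part of this setup can be illustrated with the standard triplet margin loss (a NumPy sketch of the loss only; the paper's GAN-based negative generation is not shown):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss: pull the anchor towards the
    positive and push it at least `margin` (in squared distance)
    further from the negative than from the positive."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.mean(np.maximum(d_pos - d_neg + margin, 0.0))
```

The loss is zero once every negative is sufficiently far away, so training focuses on the hard triplets that still violate the margin.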
AntonioScl/SGD_learning_regimes
Code for reproducing the paper "Dissecting the Effects of SGD Noise in Distinct Regimes of Deep Learning"
cocoakang/colmap_multichannel
A modified COLMAP to take as input multi-channel images. It can be used to evaluate the proposed multi-channel feature/descriptor.
lelea2/kdao
A collection of my personal prototypes for work, hackathons, and personal projects
swastishreya/Feature-Learning
We aim to illustrate the difference between feature extraction and feature learning. With classical machine learning models, features (the model's input) must be engineered explicitly to give good results for the task at hand. With deep learning models, these features are instead derived implicitly by the model as training progresses.
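The contrast can be made concrete on a toy task where the useful feature is the squared radius: hand-engineering it makes the problem linearly trivial, whereas a deep network would have to derive an equivalent feature implicitly during training (a hypothetical NumPy sketch, not this repository's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: classify 2-D points by whether they lie inside the unit circle.
X = rng.uniform(-2, 2, size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1).astype(int)

# Feature EXTRACTION: we supply the squared radius explicitly, after
# which a trivial linear rule (threshold at 1) solves the task exactly.
r2 = np.sum(X ** 2, axis=1)                    # hand-engineered feature
acc_extracted = np.mean((r2 < 1).astype(int) == y)   # 1.0 by construction

# Feature LEARNING: a deep model would be given only the raw
# coordinates X and, during training, its hidden layers would converge
# to an internal feature playing the role of r2 -- no explicit
# feature-engineering step is needed.
```

No linear rule on the raw coordinates alone can separate these classes, which is exactly why either an engineered feature or a learned representation is required.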