Pinned Repositories
ActiveFER
Official implementation of the paper "Active Learning with Contrastive Pre-training for Facial Expression Recognition", accepted in ACII'23.
Attention-Mechanism-Implementation
Implementation of different attention mechanisms in TensorFlow and PyTorch.
awesome-contrastive-self-supervised-learning
A comprehensive list of awesome contrastive self-supervised learning papers.
Awesome-Vision-Transformer-Collection
A collection of Vision Transformer variants and their downstream tasks
Awesome-Visual-Transformer
A curated collection of papers on Transformers in computer vision (CV). Awesome Transformers with Computer Vision (CV)
byol-pytorch
Usable Implementation of "Bootstrap Your Own Latent" self-supervised learning, from Deepmind, in Pytorch
CBAM
CBAM: Convolutional Block Attention Module for CIFAR100 on VGG19
CBAM.PyTorch
Unofficial implementation of the paper "CBAM: Convolutional Block Attention Module"
cl-vs-mim
(ICLR 2023) Official PyTorch implementation of "What Do Self-Supervised Vision Transformers Learn?"
course_self_supervised_learning
SSL Udemy course resources
Wadha-Almattar's Repositories
Wadha-Almattar/ActiveFER
Official implementation of the paper "Active Learning with Contrastive Pre-training for Facial Expression Recognition", accepted in ACII'23.
Wadha-Almattar/Attention-Mechanism-Implementation
Implementation of different attention mechanisms in TensorFlow and PyTorch.
Wadha-Almattar/awesome-contrastive-self-supervised-learning
A comprehensive list of awesome contrastive self-supervised learning papers.
Wadha-Almattar/Awesome-Visual-Transformer
A curated collection of papers on Transformers in computer vision (CV). Awesome Transformers with Computer Vision (CV)
Wadha-Almattar/byol-pytorch
Usable Implementation of "Bootstrap Your Own Latent" self-supervised learning, from Deepmind, in Pytorch
Wadha-Almattar/CBAM
CBAM: Convolutional Block Attention Module for CIFAR100 on VGG19
Wadha-Almattar/CBAM.PyTorch
Unofficial implementation of the paper "CBAM: Convolutional Block Attention Module"
Wadha-Almattar/cl-vs-mim
(ICLR 2023) Official PyTorch implementation of "What Do Self-Supervised Vision Transformers Learn?"
Wadha-Almattar/course_self_supervised_learning
SSL Udemy course resources
Wadha-Almattar/conv_arithmetic
A technical report on convolution arithmetic in the context of deep learning
Wadha-Almattar/DeepRT
Wadha-Almattar/detr
End-to-End Object Detection with Transformers
Wadha-Almattar/dino
PyTorch code for training Vision Transformers with the self-supervised learning method DINO
Wadha-Almattar/EfficientFormer
EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022]
Wadha-Almattar/FasterViT
[ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention
Wadha-Almattar/Image-Text-Matching-Summary
Summary of Related Research on Image-Text Matching
Wadha-Almattar/LG_ITM
Official PyTorch implementation of the paper "Integrating Language Guidance into Image-Text Matching for Correcting False Negatives"
Wadha-Almattar/lightly
A Python library for self-supervised learning on images.
Wadha-Almattar/machine-learning-book
Code Repository for Machine Learning with PyTorch and Scikit-Learn
Wadha-Almattar/MAE-Lite
Official implementation of the ICML 2023 paper "A Closer Look at Self-Supervised Lightweight Vision Transformers"
Wadha-Almattar/MLWritingAndResearch
Notebook examples used in machine learning writing and research
Wadha-Almattar/pytorch-grad-cam
Advanced AI explainability for computer vision. Supports CNNs, Vision Transformers, classification, object detection, segmentation, image similarity, and more.
Wadha-Almattar/PyTorch-Model-Compare
Compare neural networks by their feature similarity
Wadha-Almattar/self_supervised
Implementations of popular SOTA self-supervised learning algorithms as fastai callbacks.
Wadha-Almattar/SSiT
SSiT: Saliency-guided Self-supervised Image Transformer for Diabetic Retinopathy Grading
Wadha-Almattar/Transformer-Explainability
[CVPR 2021] Official PyTorch implementation of Transformer Interpretability Beyond Attention Visualization, a novel method for visualizing classifications made by Transformer-based networks.
Wadha-Almattar/Vicinity-Vision-Transformer
[TPAMI 2023] Official implementation of "Vicinity Vision Transformer".
Wadha-Almattar/VisionTransformer-MNIST
A notebook for plotting the attention maps of a Vision Transformer trained on MNIST digits.
Wadha-Almattar/website
Personal website with an ML blog, a project portfolio, and more of my work
Wadha-Almattar/weighted-cross-entropy-loss
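
A minimal sketch of a class-weighted cross-entropy loss in PyTorch, illustrating the general technique the repository name suggests (not the repository's own code); the class counts below are hypothetical:

```python
# Illustrative sketch only: class-weighted cross-entropy loss in PyTorch,
# with weights set inversely proportional to (hypothetical) class frequencies
# to counter class imbalance.
import torch
import torch.nn as nn

# Hypothetical class counts for a 5-class problem.
class_counts = torch.tensor([500.0, 120.0, 80.0, 40.0, 10.0])

# Inverse-frequency weights, normalized so they sum to the number of classes.
weights = class_counts.sum() / (len(class_counts) * class_counts)

criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 5)           # batch of 8 samples, 5 classes
targets = torch.randint(0, 5, (8,))  # ground-truth class indices
loss = criterion(logits, targets)
print(loss.item())
```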