Pinned Repositories
AVS_dual_encoding_attention_network
[ACM ICMR 2020] Attention Mechanisms, Signal Encodings and Fusion Strategies for Improved Ad-hoc Video Search with Dual Encoding Networks
fractional_step_discriminant_pruning_dcnn
In this work, a novel pruning framework is introduced to compress noisy or less discriminant filters of deep convolutional networks in small fractional steps. The proposed framework utilizes a class-separability criterion that can effectively exploit the labeling information in annotated training sets. Additionally, an asymptotic schedule for the pruning rate and scaling factor is adopted, so that the selected filters’ weights collapse gradually to zero, providing improved robustness. Experimental results on the CIFAR-10, Google Speech Commands (GSC) and ImageNet32 (a downsampled version of ILSVRC-2012) datasets show the efficacy of the proposed approach.
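The asymptotic schedule described above can be sketched roughly as follows. The cubic decay form and the `gamma` exponent are illustrative assumptions for this sketch, not the paper's exact formulation:

```python
def asymptotic_schedule(step, total_steps, target_rate, gamma=3.0):
    """Illustrative asymptotic schedule: the pruning rate rises smoothly
    toward `target_rate`, while the scaling factor applied to the selected
    filters' weights decays toward zero, so pruned weights collapse
    gradually rather than being cut off abruptly. The polynomial form
    (gamma=3.0) is an assumption, not the paper's exact formula."""
    progress = min(step / total_steps, 1.0)
    rate = target_rate * (1.0 - (1.0 - progress) ** gamma)   # fraction pruned
    scale = (1.0 - progress) ** gamma  # multiplier on the pruned filters' weights
    return rate, scale

# Early in training almost nothing is pruned and the selected weights are
# barely scaled; by the final step the full target rate is reached and the
# scaling factor hits zero.
r0, s0 = asymptotic_schedule(0, 100, target_rate=0.5)    # (0.0, 1.0)
rT, sT = asymptotic_schedule(100, 100, target_rate=0.5)  # (0.5, 0.0)
```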
fully_convolutional_networks
Implementation of various fully convolutional networks in Keras
Gated-ViGAT
Code and data for our paper: N. Gkalelis, D. Daskalakis, V. Mezaris, "Gated-ViGAT: Efficient bottom-up event recognition and explanation using a new frame selection policy and gating mechanism", IEEE International Symposium on Multimedia (ISM), Naples, Italy, Dec. 2022.
lecture_video_fragmentation
Lecture video segmentation dataset
ObjectGraphs
This repository hosts the code and data for our paper "ObjectGraphs: Using Objects and a Graph Convolutional Network for the Bottom-up Recognition and Explanation of Events in Video", Proc. 2nd Int. Workshop on Large Scale Holistic Video Understanding (HVU) at the IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), June 2021.
RetargetVid
Video dataset and code for transforming a video's aspect ratio, from our papers "A fast smart-cropping method and dataset for video retargeting", IEEE ICIP 2021, and "A Web Service for Video Smart-Cropping", IEEE ISM 2021.
TAME
Code and data for our learning-based eXplainable AI (XAI) method TAME: M. Ntrougkas, N. Gkalelis, V. Mezaris, "TAME: Attention Mechanism Based Feature Fusion for Generating Explanation Maps of Convolutional Neural Networks", Proc. IEEE Int. Symposium on Multimedia (ISM), Naples, Italy, Dec. 2022.
TextToVideoRetrieval-TtimesV
A PyTorch implementation of the T x V model from "Are all combinations equal? Combining textual and visual features with multiple space learning for text-based video retrieval", Proc. ECCVW 2022.
ViGAT
This repository hosts the scripts and some of the pre-trained models presented in our paper "ViGAT: Bottom-up event recognition and explanation in video using factorized graph attention network", IEEE Access, 2022.
bmezaris's Repositories
bmezaris/RetargetVid
Video dataset and code for transforming a video's aspect ratio, from our papers "A fast smart-cropping method and dataset for video retargeting", IEEE ICIP 2021, and "A Web Service for Video Smart-Cropping", IEEE ISM 2021.
bmezaris/lecture_video_fragmentation
Lecture video segmentation dataset
bmezaris/ObjectGraphs
This repository hosts the code and data for our paper "ObjectGraphs: Using Objects and a Graph Convolutional Network for the Bottom-up Recognition and Explanation of Events in Video", Proc. 2nd Int. Workshop on Large Scale Holistic Video Understanding (HVU) at the IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), June 2021.
bmezaris/fully_convolutional_networks
Implementation of various fully convolutional networks in Keras
bmezaris/TAME
Code and data for our learning-based eXplainable AI (XAI) method TAME: M. Ntrougkas, N. Gkalelis, V. Mezaris, "TAME: Attention Mechanism Based Feature Fusion for Generating Explanation Maps of Convolutional Neural Networks", Proc. IEEE Int. Symposium on Multimedia (ISM), Naples, Italy, Dec. 2022.
bmezaris/AVS_dual_encoding_attention_network
[ACM ICMR 2020] Attention Mechanisms, Signal Encodings and Fusion Strategies for Improved Ad-hoc Video Search with Dual Encoding Networks
bmezaris/TextToVideoRetrieval-TtimesV
A PyTorch implementation of the T x V model from "Are all combinations equal? Combining textual and visual features with multiple space learning for text-based video retrieval", Proc. ECCVW 2022.
bmezaris/fractional_step_discriminant_pruning_dcnn
In this work, a novel pruning framework is introduced to compress noisy or less discriminant filters of deep convolutional networks in small fractional steps. The proposed framework utilizes a class-separability criterion that can effectively exploit the labeling information in annotated training sets. Additionally, an asymptotic schedule for the pruning rate and scaling factor is adopted, so that the selected filters’ weights collapse gradually to zero, providing improved robustness. Experimental results on the CIFAR-10, Google Speech Commands (GSC) and ImageNet32 (a downsampled version of ILSVRC-2012) datasets show the efficacy of the proposed approach.
bmezaris/Gated-ViGAT
Code and data for our paper: N. Gkalelis, D. Daskalakis, V. Mezaris, "Gated-ViGAT: Efficient bottom-up event recognition and explanation using a new frame selection policy and gating mechanism", IEEE International Symposium on Multimedia (ISM), Naples, Italy, Dec. 2022.
bmezaris/ViGAT
This repository hosts the scripts and some of the pre-trained models presented in our paper "ViGAT: Bottom-up event recognition and explanation in video using factorized graph attention network", IEEE Access, 2022.
bmezaris/L-CAM
Code for our paper "Learning Visual Explanations for DCNN-Based Image Classifiers Using an Attention Mechanism", by I. Gkartzonika, N. Gkalelis, V. Mezaris, presented at the ECCV 2022 Workshop on Vision with Biased or Scarce Data (VBSD) and included in its proceedings, Oct. 2022.
bmezaris/lstm_structured_pruning_geometric_median
Structured Pruning of LSTMs via Eigenanalysis and Geometric Median. This code can be used for generating more compact LSTMs, which is very useful for mobile multimedia applications and other deep learning applications in resource-constrained environments.
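As a rough sketch of the geometric-median idea behind this kind of structured pruning (the function name and the simple distance criterion are illustrative assumptions, not this repository's implementation): units whose weight vectors lie closest to the geometric median of all units in a layer are considered the most redundant, and are therefore candidates for removal.

```python
import numpy as np

def units_near_geometric_median(weights, n_prune):
    """Rank a layer's units (filters/hidden channels) by their summed
    Euclidean distance to all other units. Units minimizing this sum lie
    near the geometric median of the layer and are treated as redundant.
    `weights` has shape (n_units, ...); returns indices of units to prune.
    Illustrative sketch only, not the repository's actual criterion."""
    flat = weights.reshape(weights.shape[0], -1)
    # Pairwise Euclidean distances between all unit weight vectors.
    dists = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    redundancy = dists.sum(axis=1)  # small sum => close to geometric median
    return np.argsort(redundancy)[:n_prune]

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4, 16))          # hypothetical layer: 8 units
idx = units_near_geometric_median(w, n_prune=2)
```

The O(n²) pairwise-distance step is fine at the per-layer scale (tens to hundreds of units) where structured pruning typically operates.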
bmezaris/subclass_deep_neural_networks
Subclass deep neural networks
bmezaris/masked-ViGAT
Code and materials for our paper: D. Daskalakis, N. Gkalelis, V. Mezaris, "Masked Feature Modelling for the unsupervised pre-training of a Graph Attention Network block for bottom-up video event recognition", Proc. 25th IEEE Int. Symp. on Multimedia (ISM 2023), Laguna Hills, CA, USA, Dec. 2023.