Multimodal Machine Learning Group (MMLG)
If you are interested in multimodal machine learning, please don't hesitate to contact us. We look forward to having you join!
Pinned Repositories
awesome-multimodal-knowledge-graph
A curated list of awesome papers, datasets, and tutorials on Multimodal Knowledge Graphs.
iPerceive
Applying Common-Sense Reasoning to Multi-Modal Dense Video Captioning and Video Question Answering | Python3 | PyTorch | CNNs | Causality | Reasoning | LSTMs | Transformers | Multi-Head Self Attention
Multimodal-Emotion-Recognition
A real-time Multimodal Emotion Recognition web app for text, sound, and video inputs
multimodal-ml-reading-list
Reading list for research topics in multimodal machine learning
MultimodalNMT
nmtpytorch
Sequence-to-Sequence Framework in PyTorch
pythia
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
Video-guided-Machine-Translation
Starter code for the VMT task and challenge
vilbert_beta
visualbert
Code for the paper "VisualBERT: A Simple and Performant Baseline for Vision and Language"
Multimodal Machine Learning Group (MMLG)'s Repositories
multimodal-machine-learning/multimodal-ml-reading-list
Reading list for research topics in multimodal machine learning
multimodal-machine-learning/Multimodal-Emotion-Recognition
A real-time Multimodal Emotion Recognition web app for text, sound, and video inputs
multimodal-machine-learning/awesome-multimodal-knowledge-graph
A curated list of awesome papers, datasets, and tutorials on Multimodal Knowledge Graphs.
multimodal-machine-learning/iPerceive
Applying Common-Sense Reasoning to Multi-Modal Dense Video Captioning and Video Question Answering | Python3 | PyTorch | CNNs | Causality | Reasoning | LSTMs | Transformers | Multi-Head Self Attention
multimodal-machine-learning/MultimodalNMT
multimodal-machine-learning/contextual-multimodal-fusion
Contextual inter-modal attention for multimodal sentiment analysis
multimodal-machine-learning/MTN
Code for the paper "Multimodal Transformer Networks for End-to-End Video-Grounded Dialogue Systems" (ACL 2019)
multimodal-machine-learning/multimodal-sentiment-analysis
Attention-based multimodal fusion for sentiment analysis
multimodal-machine-learning/nmtpytorch
Sequence-to-Sequence Framework in PyTorch
multimodal-machine-learning/pythia
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
multimodal-machine-learning/Video-guided-Machine-Translation
Starter code for the VMT task and challenge
multimodal-machine-learning/vilbert_beta
multimodal-machine-learning/visualbert
Code for the paper "VisualBERT: A Simple and Performant Baseline for Vision and Language"
multimodal-machine-learning/awesome-multimodal-machine-translation
A curated list of awesome papers, datasets, and tutorials on Multimodal Machine Translation.
multimodal-machine-learning/CVSE
The official source code for the paper "Consensus-Aware Visual-Semantic Embedding for Image-Text Matching" (ECCV 2020)
multimodal-machine-learning/Grounding_in_Dialogue
ACL 2020 Tutorial by Malihe Alikhani and Matthew Stone
multimodal-machine-learning/how2-dataset
This repository contains code and metadata for the How2 dataset
multimodal-machine-learning/lxmert
PyTorch code for EMNLP 2019 paper "LXMERT: Learning Cross-Modality Encoder Representations from Transformers".
multimodal-machine-learning/Multimodal-Transformer
[ACL'19] [PyTorch] Multimodal Transformer