
πŸ–οΈ MEAformer


This paper introduces MEAformer, a Multi-modal Entity Alignment transformer for meta modality hybrid, which dynamically predicts mutual correlation coefficients among modalities to achieve fine-grained, entity-level modality fusion and alignment.
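
As a rough illustration of this idea, the sketch below derives per-entity modality weights from transformer-style attention over the stacked modality embeddings. This is a minimal sketch under our own assumptions, not the repository's actual model; all class names, parameter names, and dimensions here are hypothetical.

```python
# Minimal sketch of entity-level modality fusion via transformer attention.
# NOT the official MEAformer implementation; names and dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntityLevelModalityFusion(nn.Module):
    def __init__(self, d_model: int = 300, n_heads: int = 4):
        super().__init__()
        # A transformer layer attends across the modality axis, so its hidden
        # states reflect per-entity cross-modal correlations.
        self.encoder = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads)
        self.score = nn.Linear(d_model, 1)

    def forward(self, modal_embs: torch.Tensor) -> torch.Tensor:
        # modal_embs: (n_entities, n_modalities, d_model), e.g. stacked graph /
        # relation / attribute / visual embeddings of each entity.
        # TransformerEncoderLayer expects (seq, batch, d_model), so the modality
        # axis is treated as the sequence axis.
        hidden = self.encoder(modal_embs.transpose(0, 1)).transpose(0, 1)
        weights = F.softmax(self.score(hidden), dim=1)   # per-entity modality weights
        return (weights * modal_embs).sum(dim=1)         # fused entity embedding

fusion = EntityLevelModalityFusion()
fused = fusion(torch.randn(8, 4, 300))  # 8 entities, 4 modalities -> (8, 300)
```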

πŸ”¬ Dependencies

pip install -r requirement.txt

Details

  • Python (>= 3.7)
  • PyTorch (>= 1.6.0)
  • numpy (>= 1.19.2)
  • Transformers (>= 4.21.3)
  • easydict (>= 1.10)
  • unidecode (>= 1.3.6)
  • tensorboard (>= 2.11.0)
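
Optionally, you can sanity-check the installed versions before training. This is a convenience snippet of ours, not part of the repository:

```python
# Quick environment sanity check (not part of the repository).
import importlib

requirements = {
    "torch": "1.6.0",
    "numpy": "1.19.2",
    "transformers": "4.21.3",
    "easydict": "1.10",
    "unidecode": "1.3.6",
    "tensorboard": "2.11.0",
}
for name, minimum in requirements.items():
    module = importlib.import_module(name)
    version = getattr(module, "__version__", "unknown")
    print(f"{name:>12}: {version} (need >= {minimum})")
```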

πŸš€ Train

  • Quick start: using the script file (run.sh)
>> cd MEAformer
>> bash run.sh
  • Optional: using the bash commands directly
>> cd MEAformer
# -----------------------
# ---- non-iterative ----
# -----------------------
# ----  w/o surface  ---- 
# FBDB15K
>> bash run_meaformer.sh 1 FBDB15K norm 0.8 0 
>> bash run_meaformer.sh 1 FBDB15K norm 0.5 0 
>> bash run_meaformer.sh 1 FBDB15K norm 0.2 0 
# FBYG15K
>> bash run_meaformer.sh 1 FBYG15K norm 0.8 0 
>> bash run_meaformer.sh 1 FBYG15K norm 0.5 0 
>> bash run_meaformer.sh 1 FBYG15K norm 0.2 0 
# DBP15K
>> bash run_meaformer.sh 1 DBP15K zh_en 0.3 0 
>> bash run_meaformer.sh 1 DBP15K ja_en 0.3 0 
>> bash run_meaformer.sh 1 DBP15K fr_en 0.3 0
# ----  w/ surface  ---- 
# DBP15K
>> bash run_meaformer.sh 1 DBP15K zh_en 0.3 1 
>> bash run_meaformer.sh 1 DBP15K ja_en 0.3 1 
>> bash run_meaformer.sh 1 DBP15K fr_en 0.3 1
# -----------------------
# ------ iterative ------
# -----------------------
# ----  w/o surface  ---- 
# FBDB15K
>> bash run_meaformer_il.sh 1 FBDB15K norm 0.8 0 
>> bash run_meaformer_il.sh 1 FBDB15K norm 0.5 0 
>> bash run_meaformer_il.sh 1 FBDB15K norm 0.2 0 
# FBYG15K
>> bash run_meaformer_il.sh 1 FBYG15K norm 0.8 0 
>> bash run_meaformer_il.sh 1 FBYG15K norm 0.5 0 
>> bash run_meaformer_il.sh 1 FBYG15K norm 0.2 0 
# DBP15K
>> bash run_meaformer_il.sh 1 DBP15K zh_en 0.3 0 
>> bash run_meaformer_il.sh 1 DBP15K ja_en 0.3 0 
>> bash run_meaformer_il.sh 1 DBP15K fr_en 0.3 0
# ----  w/ surface  ---- 
# DBP15K
>> bash run_meaformer_il.sh 1 DBP15K zh_en 0.3 1 
>> bash run_meaformer_il.sh 1 DBP15K ja_en 0.3 1 
>> bash run_meaformer_il.sh 1 DBP15K fr_en 0.3 1

❗Tips: open the run_meaformer.sh or run_meaformer_il.sh file to modify hyper-parameters or the training target. Judging from the examples above, the positional arguments are: the GPU setting, the dataset (FBDB15K / FBYG15K / DBP15K), the split or language pair (norm / zh_en / ja_en / fr_en), the seed alignment ratio (0.2 / 0.5 / 0.8, or 0.3 for DBP15K), and whether to use the surface form (0/1).

🎯 Results

$\bf{H@1}$ performance under the settings w/o surface & non-iterative, as reported in UMAEA. We modified part of MSNEA so that it uses only the attribute types themselves rather than the content of attribute values (see the issues for details):

| Method    | $\bf{DBP15K_{ZH-EN}}$ | $\bf{DBP15K_{JA-EN}}$ | $\bf{DBP15K_{FR-EN}}$ |
|:---------:|:---------------------:|:---------------------:|:---------------------:|
| MSNEA     | .609                  | .541                  | .557                  |
| EVA       | .683                  | .669                  | .686                  |
| MCLEA     | .726                  | .719                  | .719                  |
| MEAformer | .772                  | .764                  | .771                  |
| UMAEA     | .800                  | .801                  | .818                  |
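
For reference, $\bf{H@1}$ (Hits@1) is the fraction of test entity pairs whose ground-truth counterpart is ranked first by embedding similarity. Below is a minimal sketch of the computation (illustrative only, not the repository's evaluation code), assuming the two embedding matrices are row-aligned on the test pairs:

```python
# Minimal Hits@1 sketch: embeddings of aligned test pairs are assumed to be
# row-aligned, i.e. src_emb[i] should match tgt_emb[i]. Illustrative only.
import torch
import torch.nn.functional as F

def hits_at_1(src_emb: torch.Tensor, tgt_emb: torch.Tensor) -> float:
    src = F.normalize(src_emb, dim=1)
    tgt = F.normalize(tgt_emb, dim=1)
    sim = src @ tgt.t()                  # cosine similarity matrix
    pred = sim.argmax(dim=1)             # best match for each source entity
    gold = torch.arange(src.size(0))
    return (pred == gold).float().mean().item()
```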

πŸ“š Dataset

❗NOTE: Download the dataset from GoogleDrive (1.26 GB) and unzip it so that the files match the following hierarchy:

ROOT
β”œβ”€β”€ data
β”‚   └── mmkg
└── code
    └── MEAformer

Code Path

MEAformer
β”œβ”€β”€ config.py
β”œβ”€β”€ main.py
β”œβ”€β”€ requirement.txt
β”œβ”€β”€ run_meaformer.sh
β”œβ”€β”€ run_meaformer_il.sh
β”œβ”€β”€ run.sh
β”œβ”€β”€ model
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ layers.py
β”‚   β”œβ”€β”€ MEAformer_loss.py
β”‚   β”œβ”€β”€ MEAformer.py
β”‚   β”œβ”€β”€ MEAformer_tools.py
β”‚   └── Tool_model.py
β”œβ”€β”€ src
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ distributed_utils.py
β”‚   β”œβ”€β”€ data.py
β”‚   └── utils.py
└── torchlight
    β”œβ”€β”€ __init__.py
    β”œβ”€β”€ logger.py
    β”œβ”€β”€ metric.py
    └── utils.py

Data Path

mmkg
β”œβ”€β”€ DBP15K
β”‚   β”œβ”€β”€ fr_en
β”‚   β”‚   β”œβ”€β”€ ent_ids_1
β”‚   β”‚   β”œβ”€β”€ ent_ids_2
β”‚   β”‚   β”œβ”€β”€ ill_ent_ids
β”‚   β”‚   β”œβ”€β”€ training_attrs_1
β”‚   β”‚   β”œβ”€β”€ training_attrs_2
β”‚   β”‚   β”œβ”€β”€ triples_1
β”‚   β”‚   └── triples_2
β”‚   β”œβ”€β”€ ja_en
β”‚   β”‚   β”œβ”€β”€ ent_ids_1
β”‚   β”‚   β”œβ”€β”€ ent_ids_2
β”‚   β”‚   β”œβ”€β”€ ill_ent_ids
β”‚   β”‚   β”œβ”€β”€ training_attrs_1
β”‚   β”‚   β”œβ”€β”€ training_attrs_2
β”‚   β”‚   β”œβ”€β”€ triples_1
β”‚   β”‚   └── triples_2
β”‚   β”œβ”€β”€ translated_ent_name
β”‚   β”‚   β”œβ”€β”€ dbp_fr_en.json
β”‚   β”‚   β”œβ”€β”€ dbp_ja_en.json
β”‚   β”‚   └── dbp_zh_en.json
β”‚   └── zh_en
β”‚       β”œβ”€β”€ ent_ids_1
β”‚       β”œβ”€β”€ ent_ids_2
β”‚       β”œβ”€β”€ ill_ent_ids
β”‚       β”œβ”€β”€ training_attrs_1
β”‚       β”œβ”€β”€ training_attrs_2
β”‚       β”œβ”€β”€ triples_1
β”‚       └── triples_2
β”œβ”€β”€ FBDB15K
β”‚   └── norm
β”‚       β”œβ”€β”€ ent_ids_1
β”‚       β”œβ”€β”€ ent_ids_2
β”‚       β”œβ”€β”€ ill_ent_ids
β”‚       β”œβ”€β”€ training_attrs_1
β”‚       β”œβ”€β”€ training_attrs_2
β”‚       β”œβ”€β”€ triples_1
β”‚       └── triples_2
β”œβ”€β”€ FBYG15K
β”‚   └── norm
β”‚       β”œβ”€β”€ ent_ids_1
β”‚       β”œβ”€β”€ ent_ids_2
β”‚       β”œβ”€β”€ ill_ent_ids
β”‚       β”œβ”€β”€ training_attrs_1
β”‚       β”œβ”€β”€ training_attrs_2
β”‚       β”œβ”€β”€ triples_1
β”‚       └── triples_2
β”œβ”€β”€ embedding
β”‚   └── glove.6B.300d.txt
β”œβ”€β”€ pkls
β”‚   β”œβ”€β”€ dbpedia_wikidata_15k_dense_GA_id_img_feature_dict.pkl
β”‚   β”œβ”€β”€ dbpedia_wikidata_15k_norm_GA_id_img_feature_dict.pkl
β”‚   β”œβ”€β”€ FBDB15K_id_img_feature_dict.pkl
β”‚   β”œβ”€β”€ FBYG15K_id_img_feature_dict.pkl
β”‚   β”œβ”€β”€ fr_en_GA_id_img_feature_dict.pkl
β”‚   β”œβ”€β”€ ja_en_GA_id_img_feature_dict.pkl
β”‚   └── zh_en_GA_id_img_feature_dict.pkl
β”œβ”€β”€ MEAformer
└── dump
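
The `*_id_img_feature_dict.pkl` files hold the pre-extracted visual features. Assuming each pickle is a dict mapping entity ids to feature vectors (an assumption on our part; see src/data.py for the authoritative loading code), a file can be inspected like this:

```python
# Inspect a visual-feature pickle (assumes a dict: entity id -> feature vector).
import pickle

with open("data/mmkg/pkls/zh_en_GA_id_img_feature_dict.pkl", "rb") as f:
    img_features = pickle.load(f)

print(f"{len(img_features)} entities with image features")
some_id, feat = next(iter(img_features.items()))
print(f"example: entity {some_id} -> feature of shape {getattr(feat, 'shape', len(feat))}")
```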

🀝 Cite

Please consider citing this paper if you use the code or data from our work. Thanks a lot :)

@inproceedings{chen2023meaformer,
  author    = {Zhuo Chen and
               Jiaoyan Chen and
               Wen Zhang and
               Lingbing Guo and
               Yin Fang and
               Yufeng Huang and
               Yichi Zhang and
               Yuxia Geng and
               Jeff Z. Pan and
               Wenting Song and
               Huajun Chen},
  title     = {MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid},
  booktitle = {{ACM} Multimedia},
  publisher = {{ACM}},
  year      = {2023}
}

πŸ’‘ Acknowledgement

We appreciate MCLEA, MSNEA, EVA, MMEA and many other related works for their open-source contributions.
