Neural Graph Collaborative Filtering

This is our Tensorflow implementation for the paper:

Xiang Wang, Xiangnan He, Meng Wang, Fuli Feng, and Tat-Seng Chua (2019). Neural Graph Collaborative Filtering (available in the ACM DL and on arXiv). In SIGIR'19, Paris, France, July 21-25, 2019.

Author: Dr. Xiang Wang (xiangwang at u.nus.edu)

Introduction

Neural Graph Collaborative Filtering (NGCF) is a new recommendation framework based on graph neural networks. It explicitly encodes the collaborative signal, in the form of high-order connectivities in the user-item bipartite graph, by performing embedding propagation.
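For intuition, here is a minimal NumPy sketch of one embedding-propagation layer (Eq. 7 of the paper). This is a dense, toy-sized reconstruction for illustration only; the repository implements it with sparse TensorFlow ops, and the exact normalization of L depends on adj_type (see below).

import numpy as np

np.random.seed(0)

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def ngcf_layer(L, E, W1, W2):
    # One propagation step: E' = LeakyReLU((L + I) E W1 + (L E * E) W2),
    # where L is the normalized adjacency (Laplacian) of the user-item graph,
    # E stacks user and item embeddings, and * is the element-wise product.
    n = L.shape[0]
    side = (L + np.eye(n)) @ E            # neighbor messages plus self-connection
    bi = (L @ E) * E                      # element-wise affinity term
    return leaky_relu(side @ W1 + bi @ W2)

# Toy example: 4 users + 5 items, embedding size 8 (all values hypothetical).
n, d = 9, 8
A = (np.random.rand(n, n) < 0.2).astype(float)   # stand-in 0/1 adjacency
deg = A.sum(1); deg[deg == 0] = 1.0
L = A / deg[:, None]                             # D^-1 A, matching the default ngcf adj_type
E = np.random.randn(n, d)
W1 = np.random.randn(d, d)
W2 = np.random.randn(d, d)
E1 = ngcf_layer(L, E, W1, W2)                    # embeddings after one propagation step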

Citation

If you use our code or datasets in your research, please cite:

@inproceedings{NGCF19,
  author    = {Xiang Wang and
               Xiangnan He and
               Meng Wang and
               Fuli Feng and
               Tat{-}Seng Chua},
  title     = {Neural Graph Collaborative Filtering},
  booktitle = {Proceedings of the 42nd International {ACM} {SIGIR} Conference on
               Research and Development in Information Retrieval, {SIGIR} 2019, Paris,
               France, July 21-25, 2019.},
  pages     = {165--174},
  year      = {2019},
}

Environment Requirement

The code has been tested under Python 3.6.5. The required packages are as follows:

  • tensorflow == 1.8.0
  • numpy == 1.14.3
  • scipy == 1.1.0
  • sklearn == 0.19.1
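For example, a matching environment can be set up with pip (note that on PyPI the sklearn package is published as scikit-learn):

pip install tensorflow==1.8.0 numpy==1.14.3 scipy==1.1.0 scikit-learn==0.19.1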

Example to Run the Codes

The commands and their arguments are documented in the code (see the parser function in NGCF/utility/parser.py).

  • Gowalla dataset
python NGCF.py --dataset gowalla --regs [1e-5] --embed_size 64 --layer_size [64,64,64] --lr 0.0001 --save_flag 1 --pretrain 0 --batch_size 1024 --epoch 400 --verbose 1 --node_dropout [0.1] --mess_dropout [0.1,0.1,0.1]
  • Amazon-book dataset
python NGCF.py --dataset amazon-book --regs [1e-5] --embed_size 64 --layer_size [64,64,64] --lr 0.0005 --save_flag 1 --pretrain 0 --batch_size 1024 --epoch 200 --verbose 50 --node_dropout [0.1] --mess_dropout [0.1,0.1,0.1]

Some important arguments:

  • alg_type

    • It specifies the type of graph convolutional layer; ngcf is the default, and the parser also accepts gcn and graphsage. Usage: --alg_type ngcf.
  • adj_type

    • It specifies the type of Laplacian matrix, where each entry defines the decay factor between two connected nodes.
    • Here we provide four options (illustrated in the sketch after this list):
      • ngcf (by default), where each decay factor between two connected nodes is set as 1/(out degree of the node), while each node is also assigned 1 for its self-connection. Usage: --adj_type ngcf.
      • plain, where each decay factor between two connected nodes is set as 1. No self-connections are considered. Usage: --adj_type plain.
      • norm, where each decay factor between two connected nodes is set as 1/(out degree of the node + self-connection). Usage: --adj_type norm.
      • gcmc, where each decay factor between two connected nodes is set as 1/(out degree of the node). No self-connections are considered. Usage: --adj_type gcmc.
  • node_dropout

    • It indicates the node dropout ratio, which randomly blocks a particular node and discards all its outgoing messages (see the dropout sketch after this list). Usage: --node_dropout [0.1] --node_dropout_flag 1.
    • Note that the argument node_dropout_flag also needs to be set to 1; node dropout is gated behind this flag because it incurs a higher computational cost than message dropout.
  • mess_dropout

    • It indicates the message dropout ratio, which randomly drops individual outgoing messages. Usage: --mess_dropout [0.1,0.1,0.1].
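To make the four adj_type options concrete, here is a small scipy.sparse sketch of how such decay matrices can be built from the (n_users+n_items) x (n_users+n_items) interaction adjacency. This is a simplified reconstruction for illustration, not the repository's exact code:

import numpy as np
import scipy.sparse as sp

def build_laplacian(adj, adj_type='ngcf'):
    # adj: symmetric 0/1 sparse adjacency of the user-item bipartite graph.
    n = adj.shape[0]
    if adj_type == 'plain':                # decay factor 1, no self-connections
        return adj
    if adj_type == 'norm':                 # 1/(out degree + self-connection)
        adj_sc = adj + sp.eye(n)
        deg_sc = np.asarray(adj_sc.sum(axis=1)).flatten()
        return sp.diags(1.0 / deg_sc).dot(adj_sc)
    deg = np.asarray(adj.sum(axis=1)).flatten()
    d_inv = np.zeros(n)
    d_inv[deg > 0] = 1.0 / deg[deg > 0]    # leave isolated nodes at 0
    if adj_type == 'gcmc':                 # 1/out degree, no self-connections
        return sp.diags(d_inv).dot(adj)
    if adj_type == 'ngcf':                 # 1/out degree, plus identity for self-connections
        return sp.diags(d_inv).dot(adj) + sp.eye(n)
    raise ValueError('unknown adj_type: %s' % adj_type)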
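And a rough NumPy illustration of the difference between the two dropout schemes (the actual implementation applies them to sparse tensors inside the TensorFlow graph):

import numpy as np

np.random.seed(0)

def node_dropout(L, keep_prob):
    # Blocking a node: zeroing column j of L means node j's embedding is not
    # propagated to any neighbor, i.e. all of its outgoing messages are discarded.
    mask = np.random.rand(L.shape[1]) < keep_prob
    return (L * mask[None, :]) / keep_prob   # rescale surviving entries

def message_dropout(M, keep_prob):
    # Message dropout instead drops individual propagated messages,
    # independently per element of the message matrix M.
    mask = np.random.rand(*M.shape) < keep_prob
    return (M * mask) / keep_prob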

Dataset

We provide two processed datasets: Gowalla and Amazon-book.

  • train.txt

    • Train file.
    • Each line is a user with her/his positive interactions with items: userID\t a list of itemID\n (see the loader sketch after this list).
  • test.txt

    • Test file (positive instances).
    • Each line is a user with her/his positive interactions with items: userID\t a list of itemID\n.
    • Note that when reporting performance, we treat all unobserved interactions as negative instances (see the recall sketch after this list).
  • user_list.txt

    • User file.
    • Each line is a pair (org_id, remap_id) for one user, where org_id and remap_id represent the user's ID in the original dataset and in ours, respectively.
  • item_list.txt

    • Item file.
    • Each line is a pair (org_id, remap_id) for one item, where org_id and remap_id represent the item's ID in the original dataset and in ours, respectively.
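The train/test format above can be parsed in a few lines. A minimal sketch (the load_interactions helper and the Data/gowalla path are illustrative assumptions, not names from the repo):

def load_interactions(path):
    # Each line: a userID followed by the itemIDs the user interacted with.
    interactions = {}
    with open(path) as f:
        for line in f:
            ids = line.split()
            if len(ids) < 2:           # skip blank lines / users without items
                continue
            interactions[int(ids[0])] = [int(i) for i in ids[1:]]
    return interactions

train = load_interactions('Data/gowalla/train.txt')
test = load_interactions('Data/gowalla/test.txt')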
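Following the note under test.txt, here is a minimal recall@K computation under the all-unobserved-items-as-negatives protocol (a hypothetical helper, not the repository's own metric code):

import numpy as np

def recall_at_k(scores, train_items, test_items, k=20):
    # scores: the model's predicted scores for one user over all items.
    scores = scores.copy()
    scores[list(train_items)] = -np.inf      # never rank already-seen items
    top_k = np.argpartition(-scores, k)[:k]  # k highest-scored items (unordered)
    hits = len(set(top_k.tolist()) & set(test_items))
    return hits / len(test_items)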

Acknowledgement

This research is supported by the National Research Foundation, Singapore under its International Research Centres in Singapore Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.