RecQ: A Python Framework for Recommender Systems

Primary language: Python · License: GPL-3.0

Founder: @Coder-Yu
Main Contributors: @DouTong @Niki666 @HuXiLiFeng @BigPowerZ
Released by the School of Software Engineering, Chongqing University

Introduction

RecQ is a Python library for recommender systems (Python 2.7.x). It implements a suite of state-of-the-art recommendation algorithms. To run RecQ without installing its dependencies one by one, the open data science platform Anaconda is strongly recommended: it bundles a Python interpreter, common scientific computing libraries (such as NumPy, Pandas, and Matplotlib), and a package manager, which together make it a convenient environment for data science work.

Architecture of RecQ

[Figure: architecture of RecQ]

The design of RecQ draws on LibRec, a recommendation library implemented in Java.

Features

  • Cross-platform: as Python software, RecQ can be deployed and executed on any platform, including MS Windows, Linux, and macOS.
  • Fast execution: RecQ builds on fast scientific computing libraries such as NumPy and on lightweight common data structures, which makes it run considerably faster than many other Python-based libraries.
  • Easy configuration: RecQ configures recommenders through a configuration file.
  • Easy expansion: RecQ provides a set of well-designed recommendation interfaces through which new algorithms can be implemented easily.
  • Data visualization: RecQ can visualize the input dataset without running any algorithm.

Visualization

How to Run it

  • 1. Configure the **xx.conf** file in the directory named config (xx is the name of the algorithm you want to run).
  • 2. Run **main.py** in the project root and follow the prompts; a sample session is sketched below.
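For example (the exact prompt text may differ between versions), a session with a prepared UserKNN.conf looks roughly like this:

```
$ python main.py
# RecQ lists the implemented algorithms and prompts for the one to run;
# enter the algorithm whose .conf file you configured (e.g. UserKNN).
# Results are written to the directory given in output.setup.
```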

How to Configure it

Essential Options

| Entry | Example | Description |
|-------|---------|-------------|
| ratings | D:/MovieLens/100K.txt | Path to the rating dataset. Format: the columns of each row are separated by a space, tab, or comma. |
| social | D:/MovieLens/trusts.txt | Path to the social dataset. Format: the columns of each row are separated by a space, tab, or comma. |
| ratings.setup | -columns 0 1 2 | -columns: the (user, item, rating) columns of the rating data to use; -header: skip the first line (header) when reading the data. |
| social.setup | -columns 0 1 2 | -columns: the (trustor, trustee, weight) columns of the social data to use; -header: skip the first line (header) when reading the data. |
| recommender | UserKNN/ItemKNN/SlopeOne/etc. | The recommender to run. |
| evaluation.setup | -testSet ../dataset/testset.txt | Main options: -testSet, -ap, -cv. -testSet path/to/test/file: specify the test set manually. -ap ratio: automatically partition the ratings into a training set and a test set, where ratio is the proportion held out for testing (e.g. -ap 0.2). -cv k: k-fold cross validation (e.g. -cv 5). Secondary options: -b, -p, -cold. -b val: binarize the ratings; ratings greater than or equal to val become 1, the rest become 0 (e.g. -b 3.0). -p: run the cross-validation folds in parallel rather than one by one. -cold threshold: evaluate on cold-start users; users with more than threshold ratings in the training set are removed from the test set. |
| item.ranking | off -topN -1 | Main option: whether to perform item ranking. -topN N: length of the recommendation list; the default -1 returns the full list. |
| output.setup | on -dir ./Results/ | Main option: whether to write the recommendation results. -dir path: the output directory. |
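For orientation only, a hypothetical UserKNN.conf could combine the entries above roughly as follows (a simple key=value layout is assumed here; the files shipped in the config directory show the exact syntax):

```
ratings=./dataset/100K.txt
ratings.setup=-columns 0 1 2
recommender=UserKNN
similarity=pcc
num.neighbors=30
num.shrinkage=25
evaluation.setup=-cv 5 -p
item.ranking=off -topN -1
output.setup=on -dir ./Results/
```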

Memory-based Options

| Entry | Example | Description |
|-------|---------|-------------|
| similarity | pcc/cos | Similarity measure to use. Options: PCC, COS. |
| num.shrinkage | 25 | Shrinkage parameter used to devalue similarity values computed from few co-ratings; -1 disables shrinkage. |
| num.neighbors | 30 | Number of neighbors used by KNN-based algorithms such as UserKNN and ItemKNN. |
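The shrinkage option damps similarities that are estimated from only a few co-rated items. A minimal NumPy sketch of this idea (illustrative only, not RecQ's internal code; the function and variable names are made up):

```python
import numpy as np

def shrunk_pcc(ratings_a, ratings_b, shrinkage=25):
    """Pearson correlation between two users over their co-rated items,
    shrunk toward 0 when the number of co-rated items is small.
    ratings_a, ratings_b: dicts mapping item id -> rating."""
    common = set(ratings_a) & set(ratings_b)
    n = len(common)
    if n < 2:
        return 0.0
    a = np.array([ratings_a[i] for i in common], dtype=float)
    b = np.array([ratings_b[i] for i in common], dtype=float)
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:
        return 0.0
    pcc = float(a.dot(b) / denom)
    if shrinkage > 0:                       # -1 disables shrinkage
        pcc *= float(n) / (n + shrinkage)   # fewer co-ratings -> stronger damping
    return pcc

# Two users who rate three common items similarly
print(shrunk_pcc({1: 5, 2: 3, 3: 4}, {1: 4, 2: 2, 3: 5}))
```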

Model-based Options

| Entry | Example | Description |
|-------|---------|-------------|
| num.factors | 5/10/20/number | Number of latent factors. |
| num.max.iter | 100/200/number | Maximum number of iterations for iterative recommendation algorithms. |
| learnRate | -init 0.01 -max 1 | -init: initial learning rate for iterative algorithms; -max: maximum learning rate (default 1). |
| reg.lambda | -u 0.05 -i 0.05 -b 0.1 -s 0.1 | -u: user regularization; -i: item regularization; -b: bias regularization; -s: social regularization. |

How to extend it

  • 1. Make your new algorithm inherit from the proper base class (see the sketch after this list).
  • 2. Override the following functions as needed:
          - readConfiguration()
          - printAlgorConfig()
          - initModel()
          - buildModel()
          - saveModel()
          - loadModel()
          - predict()
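A minimal sketch of what this might look like (hypothetical: the import path, base-class name, and methods used below are assumptions, so check the base classes shipped with RecQ for the real interface):

```python
# Hypothetical example of extending RecQ with a new algorithm (Python 2.7 style).
# The module path and base class are assumed; adapt them to the RecQ source tree.
from baseclass.IterativeRecommender import IterativeRecommender


class MyNewMF(IterativeRecommender):
    def initModel(self):
        # let the parent set up its state (e.g. latent factor matrices) first,
        # then add anything your algorithm needs
        super(MyNewMF, self).initModel()

    def buildModel(self):
        # training loop goes here, e.g. SGD over the observed ratings
        pass

    def predict(self, u, i):
        # return the estimated score of user u on item i
        return super(MyNewMF, self).predict(u, i)
```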

Algorithms Implemented

Note: we use SGD to reach a local minimum, so there are some differences between the original papers and the code in terms of formula presentation. If you have trouble understanding the code, please open an issue to ask for help. We can guarantee that all the implementations have been carefully reviewed and tested. A toy version of such an SGD update is sketched below.
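As an illustration of what one such update typically looks like (a minimal NumPy sketch, not code taken from RecQ; the learning rate and the -u/-i weights play the roles of the learnRate and reg.lambda options above):

```python
import numpy as np

def sgd_step(P, Q, u, i, r_ui, lr=0.01, reg_u=0.05, reg_i=0.05):
    """One SGD step on the squared error of a single observed rating r_ui.
    P: user factors (num_users x k), Q: item factors (num_items x k)."""
    pu = P[u].copy()
    err = r_ui - pu.dot(Q[i])                  # prediction error for (u, i)
    P[u] += lr * (err * Q[i] - reg_u * pu)     # gradient step on the user factors
    Q[i] += lr * (err * pu - reg_i * Q[i])     # gradient step on the item factors
    return err

# Toy usage: 3 users, 4 items, 5 latent factors (cf. num.factors)
rng = np.random.RandomState(0)
P = rng.normal(scale=0.1, size=(3, 5))
Q = rng.normal(scale=0.1, size=(4, 5))
for _ in range(100):                           # cf. num.max.iter
    sgd_step(P, Q, u=0, i=2, r_ui=4.0)
print(P[0].dot(Q[2]))                          # moves toward the observed rating 4.0
```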

       
| Rating prediction | Paper |
|-------------------|-------|
| SlopeOne | Lemire and Maclachlan, Slope One Predictors for Online Rating-Based Collaborative Filtering, SDM 2005. |
| PMF | Salakhutdinov and Mnih, Probabilistic Matrix Factorization, NIPS 2008. |
| SoRec | Ma et al., SoRec: Social Recommendation Using Probabilistic Matrix Factorization, SIGIR 2008. |
| SocialMF | Jamali and Ester, A Matrix Factorization Technique with Trust Propagation for Recommendation in Social Networks, RecSys 2010. |
| RSTE | Ma et al., Learning to Recommend with Social Trust Ensemble, SIGIR 2009. |
| SVD | Y. Koren, Collaborative Filtering with Temporal Dynamics, SIGKDD 2009. |
| SVD++ | Koren, Factorization Meets the Neighborhood: a Multifaceted Collaborative Filtering Model, SIGKDD 2008. |
| SoReg | Ma et al., Recommender Systems with Social Regularization, WSDM 2011. |
| EE | Khoshneshin et al., Collaborative Filtering via Euclidean Embedding, RecSys 2010. |
| CoFactor | Liang et al., Factorization Meets the Item Embedding: Regularizing Matrix Factorization with Item Co-occurrence, RecSys 2016. |
| SREE | Li et al., Social Recommendation Using Euclidean Embedding, IJCNN 2017. |
| CUNE-MF | Zhang et al., Collaborative User Network Embedding for Social Recommender Systems, SDM 2017. |

   
| Item ranking | Paper |
|--------------|-------|
| BPR | Rendle et al., BPR: Bayesian Personalized Ranking from Implicit Feedback, UAI 2009. |
| SBPR | Zhao et al., Leveraging Social Connections to Improve Personalized Ranking for Collaborative Filtering, CIKM 2014. |
| CUNE-BPR | Zhang et al., Collaborative User Network Embedding for Social Recommender Systems, SDM 2017. |

Related Datasets

   
| Data Set | Users | Items | Ratings (Scale) | Density | Social Users | Links (Type) |
|----------|-------|-------|-----------------|---------|--------------|--------------|
| Ciao [1] | 7,375 | 105,114 | 284,086 [1, 5] | 0.0365% | 7,375 | 111,781 (Trust) |
| Epinions [2] | 40,163 | 139,738 | 664,824 [1, 5] | 0.0118% | 49,289 | 487,183 (Trust) |
| Douban [3] | 2,848 | 39,586 | 894,887 [1, 5] | 0.794% | 2,848 | 35,770 (Trust) |
| LastFM [4] | 1,892 | 17,632 | 92,834 (implicit) | 0.27% | 1,892 | 25,434 (Trust) |

Reference

[1] Tang, J., Gao, H., Liu, H.: mTrust: Discerning multi-faceted trust in a connected world. In: Proceedings of the ACM International Conference on Web Search and Data Mining (WSDM 2012), Seattle, WA, USA, pp. 93–102 (2012).

[2] Massa, P., Avesani, P.: Trust-aware recommender systems. In: Proceedings of the 2007 ACM Conference on Recommender Systems (RecSys 2007), pp. 17–24. ACM (2007).

[3] Zhao, G., Qian, X., Xie, X.: User-service rating prediction by exploring social users' rating behaviors. IEEE Transactions on Multimedia 18(3), 496–506 (2016).

[4] Cantador, I., Brusilovsky, P., Kuflik, T.: 2nd Workshop on Information Heterogeneity and Fusion in Recommender Systems (HetRec 2011). In: Proceedings of the 5th ACM Conference on Recommender Systems (RecSys 2011). ACM, New York, NY, USA (2011).