Founder: @Coder-Yu
More implementations of generic recommenders can be found in RecQ
Yue is a Python library for music recommendation (Python 2.7.x). It implements a suite of state-of-the-art music recommenders. To run Yue easily (no need to set up the dependencies used in RecQ one by one), the leading open data science platform Anaconda is strongly recommended. It integrates the Python interpreter, common scientific computing libraries (such as NumPy, Pandas, and Matplotlib), and a package manager, which together make it a perfect tool for data science researchers.
- Cross-platform: as Python software, Yue can be easily deployed and executed on any platform, including MS Windows, Linux and Mac OS.
- Fast execution: Yue is built on fast scientific computing libraries such as NumPy and on light common data structures, which make it run much faster than other Python-based libraries.
- Easy configuration: Yue configures recommenders with a simple configuration file.
- Easy expansion: Yue provides a set of well-designed recommendation interfaces through which new algorithms can be easily implemented.
- 1. Configure the **xx.conf** file in the directory named config (xx is the name of the algorithm you want to run); an example configuration is given after the table below.
- 2. Run **main.py** in the project and follow the prompt.
Entry | Example | Description |
---|---|---|
record | D:/xiami/100K.txt | Set the path to the input dataset. |
record.setup | -columns user:0,track:1,artist:2,album:3 -delim , | -columns: specifies what the columns in the dataset mean; four types of entities are supported, and if some type of information is missing, just skip the corresponding type. -delim: specifies which symbol separates the columns. |
recommender | UserKNN/ItemKNN/MostPop/etc. | The name of the recommender to run. |
evaluation.setup | -testSet ../dataset/testset.txt | Main options: -testSet, -ap, -cv, -byTime. -testSet path/to/test/file (specify the test set manually); -ap ratio (the ratings are automatically partitioned into a training set and a test set, and the number is the ratio of the test set, e.g. -ap 0.2); -cv k (cross validation, where k is the number of folds, e.g. -cv 5); -byTime ratio (sort each user's records by time; ratio decides the percentage of the test set, i.e. the most recently played records). Secondary options: -target, -b, -p, -cold. -target track (decides which type of object is recommended: artist, track, or album; only available for some general recommenders like MostPop); -b val (binarize the rating values: ratings greater than or equal to val become 1, and ratings lower than val become 0, e.g. -b 3.0); -p (if this option is added, cross validation is executed in parallel, otherwise fold by fold); -cold threshold (evaluation on cold-start users: users with more than threshold ratings in the training set are removed from the test set). |
item.ranking | off -topN 5,10,20 | -topN N1,N2,N3...: the lengths of the recommendation lists. Yue can generate evaluation results for multiple values of N at the same time. |
output.setup | on -dir ./Results/ | Main option: whether to output the recommendation results. -dir path: the directory in which the results are saved. |
num.factors | 5/10/20/number | Set the number of latent factors |
num.max.iter | 100/200/number | Set the maximum number of iterations for iterative recommendation algorithms. |
learnRate | -init 0.01 -max 1 | -init: the initial learning rate for iterative recommendation algorithms; -max: the maximum learning rate (default 1). |
reg.lambda | -u 0.05 -i 0.05 -b 0.1 | -u: user regularization; -i: item regularization; -b: bias regularization. |
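Putting these entries together, a configuration file for BPR (e.g. **config/BPR.conf**) might look like the sketch below. The key=value layout follows the convention used in RecQ, and all paths and values here are illustrative, so adjust them to your own setup:

```
record=./dataset/xiami/100K.txt
record.setup=-columns user:0,track:1,artist:2,album:3 -delim ,
recommender=BPR
evaluation.setup=-ap 0.2
item.ranking=on -topN 5,10,20
output.setup=on -dir ./Results/
num.factors=10
num.max.iter=100
learnRate=-init 0.01 -max 1
reg.lambda=-u 0.05 -i 0.05 -b 0.1
```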
- 1. Make your new algorithm extend the proper base class.
- 2. Override the following functions as needed (a minimal skeleton is shown after this list).
- printAlgorConfig()
- initModel()
- buildModel()
- saveModel()
- loadModel()
- predict()
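As a rough illustration, a new algorithm could look like the sketch below. The import path, base-class name, and method bodies are assumptions for illustration (modeled on RecQ's interface), so check Yue's source for the actual classes:

```python
# A minimal sketch of a new recommender (Python 2.7). The import path and
# base class below are assumptions modeled on RecQ; adapt them to Yue.
from baseclass.IterativeRecommender import IterativeRecommender

class MyRecommender(IterativeRecommender):
    def initModel(self):
        # initialize latent factors, caches, hyperparameters, etc.
        super(MyRecommender, self).initModel()

    def buildModel(self):
        # the training loop: iterate over the training records and
        # update the model parameters until convergence
        pass

    def predict(self, u):
        # return ranking scores over the candidate items for user u
        pass
```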
Note: We use SGD to find a local minimum, so there are some differences between the original papers and the code in terms of formula presentation. If you have trouble understanding the code, please open an issue to ask for help. We can guarantee that all the implementations have been carefully reviewed and tested.
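For instance, BPR's pairwise log-sigmoid objective leads to SGD updates like the NumPy sketch below; this is only a minimal illustration of the update rule, and the variable names do not come from Yue's source:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_sgd_step(P, Q, u, i, j, lr=0.01, reg=0.05):
    # One SGD step on a (user u, positive item i, negative item j) triple,
    # ascending the gradient of ln sigmoid(x_ui - x_uj). P and Q are user
    # and item latent-factor matrices (illustrative names, not Yue's API).
    pu, qi, qj = P[u].copy(), Q[i].copy(), Q[j].copy()
    g = sigmoid(-pu.dot(qi - qj))  # gradient scale of the log-sigmoid loss
    P[u] += lr * (g * (qi - qj) - reg * pu)
    Q[i] += lr * (g * pu - reg * qi)
    Q[j] += lr * (-g * pu - reg * qj)
```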
Item Ranking | Paper |
---|---|
Rand | Recommend tracks, artists or albums randomly |
MostPop | Recommend most popular tracks, artists or albums |
BPR | Rendle et al., BPR: Bayesian Personalized Ranking from Implicit Feedback, UAI 2009. |
MEM (implementing...) | Wang et al., Learning music embedding with metadata for context aware recommendation, ICMR 2016.
FISM | Kabbur et al., FISM: Factored Item Similarity Models for Top-N Recommender Systems, KDD 2013. |
IPF (*removed: not suitable for music recommendation*) | Xiang et al., Temporal Recommendation on Graphs via Long- and Short-term Preference Fusion, KDD 2010.
WRMF | Hu et al., Collaborative Filtering for Implicit Feedback Datasets, ICDM 2008.
Data Set | Users | Tracks | Artists | Albums | Records | Tags | User Profiles | Artist Profiles
---|---|---|---|---|---|---|---|---
NowPlaying [1] | 1,744 | 16,864 | 2,108 | N/A | 1,117,335 | N/A | N/A | N/A
Xiami [2] | 4,270 | 177,289 | 25,844 | 68,479 | 1,337,948 | N/A | N/A | N/A
[1] Eva Zangerle, Martin Pichl, Wolfgang Gassler, and Günther Specht. #nowplaying Music Dataset: Extracting Listening Behavior from Twitter. In Proceedings of the First International Workshop on Internet-Scale Multimedia Management (WISMM '14). ACM, New York, NY, USA, 2014, 21-26.
[2] Dongjing Wang, et al. Learning Music Embedding with Metadata for Context Aware Recommendation. In Proceedings of the 2016 ACM International Conference on Multimedia Retrieval (ICMR '16). ACM, 2016.