Clarifications regarding Recommender training intervals
I am interested in how the Recommender service works, and would like to extend it as part of my research (I met with members of the team in Umeå, Sweden this summer during ICAC/SASO, as you may recall -- hi!). As part of this, I would like to understand more about the intended behavior with regard to training the recommenders.
The documented default appears to be that continuous re-training is disabled, and that re-training, at least in your performance tests, is instead triggered manually by clients (presumably for more predictable behavior?). A rough sketch of how I currently understand such a manual trigger follows the questions below.
- Is this interpretation correct?
- In your experience from this domain in general and of course TeaStore specifically, what would or should be a reasonable and realistic interval to perform continuous re-training?
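For reference, this is roughly how I trigger re-training manually at the moment. It is only a minimal sketch on my side: it assumes a single-host deployment where the Recommender is reachable on `localhost:8080` and exposes its training operation under the REST path shown below (both the host/port and the path are my assumptions, please correct me if they are wrong):

```java
// Minimal sketch of a manual re-training trigger (host, port, and endpoint
// path are assumptions based on a default single-host deployment).
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TriggerRetraining {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/tools.descartes.teastore.recommender/rest/train"))
                .GET()
                .build();
        // Fire the training request once and report the HTTP status.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Training trigger returned HTTP " + response.statusCode());
    }
}
```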
Hi Lars,
yes, your interpretation is correct; that is the current behavior. For your second question: re-training in the TeaStore is mainly intended to create interesting performance behavior. Therefore, if you want to apply continuous re-training, you would probably train more often than you would in industry.
For example, you want your re-training to occur several times during the course of one experiment, so useful values would be in the range of maybe 10 to 60 minutes. In industry, however, re-training would probably be applied once every night or even less frequently (likely as a background batch job during low-load phases).
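If it helps, here is a minimal sketch of what such continuous re-training could look like from the client side. It reuses the same hypothetical train endpoint and host/port as in your snippet above (not verified here), and simply uses 30 minutes as one example value from that range:

```java
// Minimal sketch of client-side continuous re-training for an experiment.
// The train endpoint URI is an assumption; the 30-minute interval is just an
// example value from the suggested 10-60 minute range.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicRetraining {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();
    private static final URI TRAIN_URI =
            URI.create("http://localhost:8080/tools.descartes.teastore.recommender/rest/train");

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Trigger re-training immediately and then every 30 minutes
        // for the duration of the experiment.
        scheduler.scheduleAtFixedRate(PeriodicRetraining::train, 0, 30, TimeUnit.MINUTES);
    }

    private static void train() {
        try {
            HttpRequest request = HttpRequest.newBuilder(TRAIN_URI).GET().build();
            HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Re-training triggered, HTTP " + response.statusCode());
        } catch (Exception e) {
            System.err.println("Re-training trigger failed: " + e.getMessage());
        }
    }
}
```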
Btw, if you have more questions regarding the TeaStore or its recommender, we could also have a quick Skype chat?