I am working on a real-time recommender system that predicts products for a user using deep learning techniques (such as Wide & Deep Learning, Deep & Cross Network, etc.). The product catalogue can be huge (thousands to a million items), and for a given user the model needs to be evaluated against each product in real time. Since scalability is an important concern, is there any way to reduce the serving-time complexity by tuning the model architecture?
1 Answer
You write "... the model needs to be evaluated against each product in real-time", which makes me think you are using a binary classification architecture (sigmoid in the final layer) with negative sampling of user/item interactions when training your model.
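For reference, here is a minimal sketch of the setup I am assuming: a pointwise model that must be scored once per (user, item) pair, so serving cost grows linearly with the catalogue. The layer sizes and feature dimensions are placeholders, not taken from your description:

```python
import torch
import torch.nn as nn

class PointwiseRanker(nn.Module):
    """Assumed setup: binary classifier scored once per (user, item) pair."""
    def __init__(self, user_dim=64, item_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(user_dim + item_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # single logit -> sigmoid = P(interaction | user, item)
        )

    def forward(self, user_feats, item_feats):
        x = torch.cat([user_feats, item_feats], dim=-1)
        return torch.sigmoid(self.mlp(x))

# Serving cost scales with the catalogue: every candidate item must be
# pushed through the network (here as one large batch of width 100k).
model = PointwiseRanker()
user = torch.randn(1, 64)
catalogue = torch.randn(100_000, 64)  # placeholder item features
scores = model(user.expand(len(catalogue), -1), catalogue)  # shape: (100_000, 1)
```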
Have you considered using multi-class classification instead? That way, you predict only once per user for the entire product catalogue and select the top-k candidates from the softmax layer, so you only need a single forward pass through your neural net during inference.
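A hedged sketch of what I mean, with placeholder layer sizes and k (not a drop-in implementation): the final layer has one logit per catalogue item, and at serving time you take the top-k logits directly, since softmax does not change the ranking.

```python
import torch
import torch.nn as nn

class MulticlassRecommender(nn.Module):
    """One logit per catalogue item; a single forward pass scores everything."""
    def __init__(self, user_dim=64, hidden=128, num_items=100_000):
        super().__init__()
        self.tower = nn.Sequential(
            nn.Linear(user_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_items),  # output width = catalogue size
        )

    def forward(self, user_feats):
        return self.tower(user_feats)  # raw logits; apply softmax only if you need probabilities

model = MulticlassRecommender()
user = torch.randn(1, 64)

logits = model(user)                                        # one forward pass: (1, num_items)
top_scores, top_items = torch.topk(logits, k=10, dim=-1)    # indices of the top-k candidate items
```

The trade-off is that the output layer grows with the catalogue, so memory and training cost increase with the number of items, but inference becomes a single pass per user instead of one evaluation per product.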
Marcus