Saturday 11:45–12:30 in Tower Suite 2

Embeddings! Embeddings everywhere! - How to build a recommender system using representation learning

Maciej Arciuch, Karol Grzegorczyk

Audience level:
Intermediate

Description

Recommender systems are a major source of revenue for modern e-commerce. In this talk we will describe a large-scale (over 90 million items and 20 million registered users) e-commerce recommender system used at Allegro. The system is composed of two main parts: learning item representations and finding nearest neighbours. We will share the experience we gained from building the system.

Abstract

Do we need deep neural networks to create an efficient recommender system? In this talk we’ll show that even shallow networks can learn rich latent representations and become the core of a recommender system for a large-scale e-commerce platform. We’ll describe the idea behind our system and the issues we faced at Allegro, which serves over 20 million users and lists over 90 million items.

In our approach we learn several different representations of the same products. The choice of representation depends on the recommendation scenario: viewed-together collaborative filtering (CF) is suitable for finding generic similarities between products, while bought-together CF performs well at finding complementary items. We also use content-based approaches, based on both item descriptions and images, to address the cold-start issue. For collaborative filtering we use the skip-gram flavour of the word2vec algorithm, and for text-based representations we use fastText.
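To make the collaborative-filtering part concrete, here is a minimal sketch that trains skip-gram word2vec item embeddings from user browsing sessions. The talk does not prescribe a particular implementation; gensim and the toy session data below are assumptions for illustration only.

# Minimal sketch: skip-gram word2vec item embeddings learned from user sessions.
# gensim and the toy session data are assumptions, used only for illustration.
from gensim.models import Word2Vec

# Each "sentence" is the sequence of item IDs a user interacted with in one session.
sessions = [
    ["item_101", "item_203", "item_57"],
    ["item_203", "item_57", "item_999"],
    ["item_101", "item_999"],
]

model = Word2Vec(
    sentences=sessions,
    vector_size=100,  # dimensionality of the item embeddings
    window=5,         # context window within a session
    min_count=1,      # keep all items in this toy example
    sg=1,             # sg=1 selects the skip-gram architecture
    workers=4,
)

# Items that co-occur in similar contexts end up close in the embedding space.
print(model.wv.most_similar("item_203", topn=3))

The same idea carries over to the different CF flavours: only the definition of a "session" changes, e.g. items viewed together versus items bought together.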

Another important element of our system is the nearest-neighbours (NN) search module. At our scale we need to use approximate methods. NN search can be performed in either an online or an offline manner; we’ll discuss the pros and cons of both approaches. The core of the module is the Facebook Faiss library. We also evaluated Spotify Annoy, but chose the former due to its larger variety of available algorithms, GPU acceleration and better overall results in our scenario.
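Below is a minimal sketch of approximate nearest-neighbour lookup with Faiss. The random vectors stand in for the learned item embeddings, and the index type and parameters are illustrative rather than the exact production configuration.

# Minimal sketch of approximate nearest-neighbour search with Faiss.
# Random vectors stand in for the learned item embeddings.
import numpy as np
import faiss

d = 100                                  # embedding dimensionality
item_vectors = np.random.rand(100_000, d).astype("float32")
faiss.normalize_L2(item_vectors)         # inner product equals cosine after normalisation

quantizer = faiss.IndexFlatIP(d)         # coarse quantizer for the inverted-file index
index = faiss.IndexIVFFlat(quantizer, d, 1024, faiss.METRIC_INNER_PRODUCT)
index.train(item_vectors)                # learn the coarse clustering
index.add(item_vectors)                  # index all item embeddings
index.nprobe = 16                        # clusters visited per query (speed/recall trade-off)

queries = item_vectors[:5]               # find neighbours for a few items
distances, neighbours = index.search(queries, 10)
print(neighbours)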

Apart from the high-level description, we’ll show some technical details, including how we migrated the solution from a Hadoop environment to Google Cloud Platform, how we leveraged Apache Beam (Google Dataflow) and Google Cloud ML Engine to obtain vector representations of products, and how we orchestrated all the pieces using Spotify Luigi. We also managed to make our data pipelines usable both in production and in hyperparameter-tuning jobs driven by Google Vizier, thus avoiding redundant code.
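To give a flavour of the orchestration layer, here is a small Luigi sketch with two dependent tasks. The task names, parameters and local file targets are hypothetical; in the real pipeline the heavy lifting is delegated to Dataflow and Cloud ML Engine jobs.

# Minimal Luigi sketch of a two-step pipeline: prepare sessions, then train embeddings.
# Task names, parameters and local file targets are hypothetical.
import datetime
import luigi

class PrepareSessions(luigi.Task):
    date = luigi.DateParameter()

    def output(self):
        return luigi.LocalTarget(f"data/sessions_{self.date}.txt")

    def run(self):
        # Placeholder for the session-extraction step (Apache Beam in production).
        with self.output().open("w") as f:
            f.write("item_101 item_203 item_57\n")

class TrainEmbeddings(luigi.Task):
    date = luigi.DateParameter()

    def requires(self):
        return PrepareSessions(date=self.date)

    def output(self):
        return luigi.LocalTarget(f"models/embeddings_{self.date}.txt")

    def run(self):
        # Placeholder for submitting the training job (Cloud ML Engine in production).
        with self.output().open("w") as f:
            f.write("trained\n")

if __name__ == "__main__":
    luigi.build([TrainEmbeddings(date=datetime.date.today())], local_scheduler=True)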

The audience will learn how algorithms originating in NLP can be successfully applied to a completely different domain, and how to transform a proof of concept into a production-ready e-commerce recommender system.
