The workshop on Personalization, Recommendation and Search (PRS) aims to bring together practitioners and researchers in these three domains. Its goal is to facilitate the sharing of information and practices, build bridges between these communities, and promote discussion.
Please register in advance through the RSVP button above. We'll close registrations on June 1st or when we reach capacity.
Don't hesitate to contact us for transportation information:
prs2018@netflix.com
This @NetflixResearch workshop is organized by:
  Yves Raimond - yraimond[at]netflix.com
  Roelof van Zwol - roelofvanzwol[at]netflix.com
  Justin Basilico - jbasilico[at]netflix.com
  Tony Jebara - tjebara[at]netflix.com
  Aish Fenton - afenton[at]netflix.com
Pragmatic lessons learnt while teaching machines
Very human decisions are crucial in determining the outcomes of machine learning efforts. This talk will use examples from search, voice, and recommendation systems to illustrate what can get lost in translation between people, data, and machines. Algorithmic bias, for example, is gaining attention as a pressing issue, and rightly so. However, standard processes and tools for practitioners are not yet readily available.
We share early lessons learnt from an approach to integrating an understanding of algorithmic bias into product team practices. We'll discuss pragmatic questions to ask when building models that make predictions and learn about people's interests. This includes data curation, the selection of metric targets, and the different team roles to include to ensure that systems work for a diversity of people.
Henriette Cramer is a lab lead at Spotify. Her research focuses on voice interactions and the human side of machine learning. She is particularly interested in team decisions and data ecosystems that affect machine learning outcomes, the impact of data curation and design, and the (mis)match between human and machine models of the world around them. She enjoys pragmatic approaches to translating research insights into practice, and balancing direct product impact with more exploratory strategic research. Previously, Henriette worked at Yahoo on user engagement and the quality of recommendations, search, and chatbots. While at the Mobile Life Centre and the Swedish Institute of Computer Science, she led projects on human-robot interaction and location-based interactions. Henriette holds a PhD from the University of Amsterdam on people's responses to autonomous systems.
Latent Models, Shallow and Deep, for Recommender Systems (slides)
In this talk, we will survey latent models, starting with shallow and progressing toward deep, as applied to personalization and recommendations. After providing an overview of the Netflix recommender system, we will discuss research at the intersection of deep learning, natural language processing, and recommender systems, and how these approaches relate to traditional collaborative filtering techniques. We will present case studies in the space of deep latent variable models applied to recommender systems.
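To give a flavor of the "shallow" end of this spectrum, the classic latent factor model can be sketched in a few lines of NumPy. The toy ratings matrix, latent dimension, and hyperparameters below are purely illustrative, not Netflix's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy interaction matrix: rows are users, columns are items,
# 0 marks an unobserved entry.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
observed = R > 0

k = 2                                            # latent dimension (illustrative)
U = 0.1 * rng.standard_normal((R.shape[0], k))   # user factors
V = 0.1 * rng.standard_normal((R.shape[1], k))   # item factors
lr, reg = 0.01, 0.05

# SGD over observed entries only, with L2 regularization.
for _ in range(2000):
    for i, j in zip(*np.nonzero(observed)):
        err = R[i, j] - U[i] @ V[j]
        U[i] += lr * (err * V[j] - reg * U[i])
        V[j] += lr * (err * U[i] - reg * V[j])

# Predicted scores for every (user, item) pair, including unobserved ones.
scores = U @ V.T
```

The unobserved entries of `scores` are the model's recommendations; deep variants replace the inner products with neural networks over the same latent structure.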
Anoop Deoras works with the core Recommender System group here at Netflix. He is passionate about deep learning and its application in solving challenging problems in the space of personalization and recommendations. He develops and experiments with several TensorFlow-backed deep learning models at Netflix. He completed his Ph.D. in Electrical and Computer Engineering at Johns Hopkins University, working on solving intractable decoding problems in statistical speech recognition and machine translation using recurrent neural network language models.
Dawen Liang is a senior research scientist at Netflix, working on core discovery algorithms. His research interests include Bayesian modeling and approximate inference, as well as causal inference, and their applications to recommender systems. He completed his Ph.D. in the Electrical Engineering Department at Columbia University, working on probabilistic latent variable models for analyzing music, speech, text, and user behavior data.
Improving the quality of top-N recommendation: modern approaches with a special focus on similar users' behavior (slides)
The quality of top-N recommendations is crucial, as it determines the usefulness of the recommendation system to its users. So, how do we decide which products should be recommended? How do we address the limitations of current approaches in order to achieve better quality? A common assumption of modern top-N recommenders is that users base their behavior on a set of preferences shared by all. In this talk, I will describe a new user model, which proposes that a user determines her preferences based on some global aspects shared by all users, and on some more specific aspects shared only by users similar to her. The value of this user model will be illustrated through novel methods which follow it, developed in the context of both item-item and latent space approaches. Finally, I will present insights from an ongoing investigation into the error properties of popular top-N recommendation methods and how they correlate with top-N recommendation quality; these insights show that, for high-quality recommendations, users (or items) with similar rating behaviors should have similar errors in their missing entries.
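For context, the standard item-item baseline this line of work builds on can be sketched as follows. The interaction matrix is a toy example, and this is the common cosine-similarity baseline, not the speaker's proposed method:

```python
import numpy as np

# Toy implicit-feedback matrix: users x items, 1 = interacted.
# (Every item here has at least one interaction, so norms are nonzero.)
R = np.array([[1, 1, 0, 0, 1],
              [1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 1, 1, 1, 0]], dtype=float)

# Item-item cosine similarity.
norms = np.linalg.norm(R, axis=0)
S = (R.T @ R) / np.outer(norms, norms)
np.fill_diagonal(S, 0.0)           # an item shouldn't recommend itself

def top_n(user, n=2):
    """Score items by similarity to the user's history; exclude seen items."""
    scores = R[user] @ S
    scores[R[user] > 0] = -np.inf  # mask already-consumed items
    return np.argsort(-scores)[:n]
```

The talk's user model would, roughly, replace the single global `S` with a combination of globally shared and neighborhood-specific components.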
Evangelia Christakopoulou is an applied researcher in the area of recommender systems. She received her Ph.D. in Computer Science from the University of Minnesota in 2018. She has multiple publications in top data mining and recommender systems conferences, including one award-winning publication. She has also been serving on the program committee of top-tier data mining and information retrieval conferences.
Beyond Being Accurate: Toward More Inclusive and Fairer Models using Focused Learning and Adversarial Training
One big challenge of machine learning research is devising approaches to learn more inclusive and fairer models. For example, when building a recommender system, how can we ensure that every user and item is actually modeled well? Typically, recommender systems are built, optimized, and tuned to improve a single global prediction objective. However, as we will show, recommenders often leave many items or users badly modeled and thus under-served. As a result, we ask the following question: how can we improve models for a specified subset of items or users? In this talk, we will discuss recent research at Google toward more inclusive and fairer models using two techniques [1] [2]. First, we offer a new technique called "focused learning" that is based on hyperparameter optimization and a modified optimization objective. We demonstrate prediction improvements on multiple datasets; for instance, on MovieLens we achieve as much as a 17% improvement for niche movies, cold-start items, and even the most badly modeled items in the original model.
Second, using adversarial training in a multi-task network, we remove information about a sensitive subgroup or attribute from the latent representation learned by the neural network. In particular, we study how the choice of data for the adversarial training impacts the resulting model. We find two interesting results: encouragingly, only a small amount of data is needed to train these adversarial models, and a balanced distribution of examples yields significantly more inclusive and fairer models.
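As a rough, purely linear sketch of the adversarial idea in [2]: a shared encoder is trained to minimize the main-task loss while *ascending* (gradient reversal) the loss of an adversary that tries to recover a sensitive attribute from the representation. The synthetic data, dimensions, and trade-off weight below are all hypothetical, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the task label y depends only on the first two
# features, the sensitive attribute a only on the last two.
n = 500
X = rng.standard_normal((n, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
a = (X[:, 2] + X[:, 3] > 0).astype(float)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

k, lr, lam = 2, 0.1, 1.0                # latent dim, step size, adversary weight
W = 0.1 * rng.standard_normal((k, 4))   # shared linear encoder
v = np.zeros(k)                         # main-task head (predicts y)
u = np.zeros(k)                         # adversary head (predicts a)

for _ in range(2000):
    Z = X @ W.T                         # shared representation
    gy = sigmoid(Z @ v) - y             # d(task loss)/d(task logit)
    ga = sigmoid(Z @ u) - a             # d(adv loss)/d(adv logit)

    # Encoder gradient: descend the task loss, ascend the adversary's
    # loss (gradient reversal), squeezing information about `a` out of Z.
    dZ = np.outer(gy, v) - lam * np.outer(ga, u)

    # Both heads descend their own logistic losses.
    v -= lr * (Z.T @ gy) / n
    u -= lr * (Z.T @ ga) / n
    W -= lr * (dZ.T @ X) / n
```

After training, the task head should classify `y` well while the adversary's accuracy on `a` stays near chance.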
[1] Alex Beutel, Ed H. Chi, Zhiyuan Cheng, Hubert Pham, John Anderson. Beyond Globally Optimal: Focused Learning for Improved Recommendations. WWW, 2017.
[2] Alex Beutel, Jilin Chen, Zhe Zhao, Ed H. Chi. Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations. KDD FATML Workshop, 2017.
Ed H. Chi is a Principal Scientist at Google, leading a machine learning research team focused on recommendation systems and social computing research. He has launched significant improvements of recommenders for YouTube, Google Play Store and Google+. With 39 patents and over 110 research articles, he is known for research on Web and online social systems, and the effects of social signals on user behavior. Prior to Google, he was the Area Manager and a Principal Scientist at Palo Alto Research Center's Augmented Social Cognition Group, where he led the team in understanding how social systems help groups of people to remember, think and reason. Ed completed his three degrees (B.S., M.S., and Ph.D.) in 6.5 years from University of Minnesota, and has been doing research on software systems since 1993. Recognized as an ACM Distinguished Scientist and elected into the CHI Academy, he has been featured and quoted in the press, including the Economist, Time Magazine, LA Times, and the Associated Press, and has won awards for both teaching and research. In his spare time, Ed is an avid photographer and snowboarder, and has a blackbelt in Taekwondo.
Recommender Systems in a Real Time Bidding Platform
In a Real Time Bidding (RTB) ad platform, figuring out the right ad to show to a user needs to happen in under 100 ms. This involves deciding the advertiser and product combination that the user is most likely to interact with. Specifically, we need to recommend a set of products from a combined catalog of ~3B products for more than a billion users. In this talk, I will introduce the recommender system in our RTB platform and the constraints under which it operates, and speak to some of the approaches we have experimented with. I will also present some of the challenges we have faced and highlight our research work on solving these problems.
Suju Rajan is the VP, Head of Research at Criteo. At Criteo, her team works on all aspects of performance-driven computational advertising, including real-time bidding, large-scale recommendation systems, auction theory, reinforcement learning, online experimentation, metrics, and scalable optimization methods. Prior to Criteo, she was the Director of Personalization Sciences at Yahoo Research, where her team worked on personalized recommendations for several Yahoo products.
Multi-armed Bandit Approaches for recommendations at Netflix (slides)
In this talk, we will present a general multi-armed bandit framework for recommendations on the Netflix homepage. A key aspect of our framework is closed-loop attribution, which links how our members respond to a recommendation. We perform frequent policy updates using user feedback collected over a past time window. Our framework is generic and allows different contextual bandit algorithms, reward exploration schemes, and policies to be easily plugged in. We will present two example case studies using MABs at Netflix: a) Artwork Personalization, recommending personalized visuals for each of our members for different titles, and b) Billboard recommendation, recommending the right title to be watched on the Billboard.
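For readers new to bandits, the simplest policy that could be plugged into such a framework is ε-greedy over running reward estimates. The sketch below uses made-up play probabilities and is far simpler than the contextual, closed-loop production system the talk describes:

```python
import random

random.seed(42)

class EpsilonGreedy:
    """Minimal ε-greedy bandit: one reward estimate per arm, updated online."""

    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms   # running mean reward per arm

    def select(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))           # explore
        return max(range(len(self.counts)),
                   key=lambda arm: self.values[arm])            # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        # Incremental mean: no need to store the full reward history.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Simulated play probabilities for three hypothetical artwork variants.
true_rates = [0.05, 0.12, 0.08]
bandit = EpsilonGreedy(n_arms=3)
for _ in range(20000):
    arm = bandit.select()
    reward = 1.0 if random.random() < true_rates[arm] else 0.0
    bandit.update(arm, reward)
```

A contextual variant would condition `select` on member features; swapping in Thompson sampling or UCB only changes the `select` method, which is the "plug-in" property the abstract refers to.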
Jaya Kawale is a Senior Research Scientist at Netflix working on problems related to targeting and recommendations. She received her PhD in Computer Science from the University of Minnesota. Her main areas of interest are large scale machine learning and data mining.
Fernando Amat is a Senior Research Engineer at Netflix working on problems related to automated image selection and optimization. He received his PhD in Electrical Engineering from Stanford University. He is interested in developing and applying large-scale machine learning algorithms to solve problems in different domains, from recommender systems to biomedical imaging.
Personalized Concert Recommendations at Pandora
Since 2005, Pandora has been known for delivering personalized music recommendations, becoming a go-to source of music discovery for over 70 million users. With the development of its Artist Marketing Platform (AMP), Pandora also aims to be the go-to destination for artists to connect with their fans, whether it's by speaking directly to Pandora users with AMPcast, using Pandora's recommendation algorithms to promote their latest single, or notifying listeners of upcoming tour dates nearby. Given the wealth of data that Pandora has about users' music tastes, we are particularly well suited to notify users when their favorite artists are on tour nearby. In this talk, I will detail how Pandora solved the cold-start problem for concert recommendations by making recommendations based on listening history. I will also discuss the methods used to predict interactions with future concert recommendations, as well as the geo-targeting strategies employed to determine how far people are willing to travel to attend a concert.
Kristi Schneck is a Senior Scientist at Pandora, where she is leading several science initiatives on Pandora's next-generation podcast recommendation system. She has driven the science work for a variety of applications, including concert recommendations and content management systems. Kristi holds a PhD in physics from Stanford University and dual bachelor's degrees in physics and music from MIT.
Learning from logged bandit feedback (slides)
Many of the most impactful applications of machine learning are not just about prediction, but about putting learning systems in control of selecting the right action at the right time (e.g., search engines, recommender systems, or automated trading platforms). These systems are both producers and users of data: the logs of the selected actions and their outcomes (e.g., derived from clicks, ratings, or revenue) can provide valuable training data for learning the next generation of the system, giving rise to some of the biggest datasets we have collected. Machine learning in these settings is challenging, since the system in operation biases the log data through the actions it selects, and outcomes remain unknown for the actions not taken. Learning methods must therefore reason about how changes to the system will affect future outcomes. We will summarize recent advances in these counterfactual learning techniques, and demonstrate how deep neural networks can be trained in these settings (ICLR'18).
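The core counterfactual estimator behind much of this work, inverse propensity scoring (IPS), fits in a few lines: each logged (context, action, reward, propensity) record is reweighted by the new policy's probability of the logged action relative to the logging policy's. The bandit log below is synthetic:

```python
import random

random.seed(0)

def ips_estimate(log, target_policy):
    """Inverse-propensity-scoring estimate of a target policy's reward.

    `log` holds (context, action, reward, propensity) tuples, where
    `propensity` is the logging policy's probability of the logged action,
    and `target_policy(context, action)` returns the new policy's
    probability of taking that same action.
    """
    total = 0.0
    for context, action, reward, propensity in log:
        total += reward * target_policy(context, action) / propensity
    return total / len(log)

# Synthetic bandit log: a uniform logging policy over two actions,
# where action 1 always yields reward 1 and action 0 yields reward 0.
log = []
for _ in range(10000):
    action = random.randrange(2)
    reward = float(action == 1)
    log.append((None, action, reward, 0.5))

# A target policy that always picks action 1: its true reward is 1.0,
# and the IPS estimate should land close to that.
always_one = lambda context, action: 1.0 if action == 1 else 0.0
estimate = ips_estimate(log, always_one)
```

In practice this plain estimator has high variance when target and logging policies diverge, which motivates the clipped, self-normalized, and learned variants that the talk surveys.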

Adith Swaminathan is a researcher in the Reinforcement Learning group at Microsoft Research AI (Redmond). He studies techniques for counterfactual reasoning in learning systems -- applications include off-policy reinforcement learning, contextual bandits and understanding the biases that confound user interaction data. He completed his PhD at Cornell University (2017) and received a BTech from IIT Bombay (2010).