A core part of the Clubhouse experience is the hallway, where you can meet friends, drop in on interesting discussions and speak your mind on everything from the latest in fashion to the Hegelian dialectic. We've been working on personalized recommendations for live rooms in the hallway to help folks find relevant rooms seamlessly and make Clubhouse a fun place for everyone.
We started out by ranking rooms in the hallway with simple heuristics we felt would best match you to live rooms, picking and recommending rooms based on how many of your friends were in them or how closely they matched your selected topics. Since then, we've moved to a machine learning model, which makes more nuanced ranking predictions and results in better overall quality.
Our Gradient Boosted Decision Tree (GBDT) ranking model is trained on hundreds of features that quantify various attributes of your activity in past rooms. For instance, the model looks at whether you spend more time in small private vs. large public rooms or whether you speak often or prefer to listen, and uses these to rank rooms optimally for you. The model also looks at features of the room itself, like its duration and the number of participants, as well as features of the club if it's a club room. The ranking model also accepts user and club embedding vectors as inputs. These embeddings are trained on user-club interaction data to build dense representations of users and clubs.
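As a sketch of what that embedding step could look like, here is a toy matrix-factorization trainer over (user, club, weight) interaction tuples. The function name, dimensions, and data below are all illustrative assumptions, not Clubhouse's actual pipeline:

```python
import numpy as np

def train_embeddings(interactions, n_users, n_clubs, dim=8,
                     lr=0.1, epochs=200, seed=0):
    """Toy matrix-factorization sketch: learn dense user and club
    vectors from (user_id, club_id, weight) interaction tuples."""
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(n_users, dim))   # user embeddings
    C = rng.normal(scale=0.1, size=(n_clubs, dim))   # club embeddings
    for _ in range(epochs):
        for u, c, w in interactions:
            err = w - U[u] @ C[c]          # prediction error
            U[u] += lr * err * C[c]        # SGD update for the user
            C[c] += lr * err * U[u]        # SGD update for the club
    return U, C

# Example: user 0 interacts heavily with club 0, user 1 with club 1.
data = [(0, 0, 1.0), (1, 1, 1.0), (0, 1, 0.0), (1, 0, 0.0)]
U, C = train_embeddings(data, n_users=2, n_clubs=2)
```

After training, the dot product of a user vector with a club vector approximates their affinity, which is what makes these embeddings useful as dense model inputs.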
The model is trained as a classifier, with the target being room joins / non-joins from past room impressions. The classifier score for each room, a value between 0 and 1, is used to determine the relative ordering of the room in the hallway.
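A minimal sketch of this setup, using scikit-learn's GradientBoostingClassifier on synthetic impression data (the feature names and the label-generating process here are invented for illustration, not our production features):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic impression log: each row is one (user, room) impression with
# hypothetical features [friends_in_room, topic_overlap, room_size].
n = 2000
X = np.column_stack([
    rng.poisson(1.0, n),          # friends already in the room
    rng.random(n),                # overlap with the user's topics
    rng.integers(2, 500, n),      # current room size
])
# Simulated label: users tend to join rooms with friends and topic overlap.
p_join = 1 / (1 + np.exp(-(1.5 * X[:, 0] + 2.0 * X[:, 1] - 2.5)))
y = rng.random(n) < p_join        # join / non-join target

model = GradientBoostingClassifier(n_estimators=50).fit(X, y)

# Rank candidate rooms for one user by the model's join probability.
candidates = np.array([[3, 0.9, 120],   # friends + strong topic match
                       [0, 0.1, 40]])   # no friends, weak match
scores = model.predict_proba(candidates)[:, 1]   # values in [0, 1]
ranking = np.argsort(-scores)                    # highest score first
```

Only the relative ordering of the scores matters for the hallway, which is why a classifier score works as a ranking signal.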
Our model training and inference system looks like a traditional ranking system in a lot of ways, but it has some key differences that arise from the fact that we are ranking live rooms. Adapting quickly to changes in live rooms and keeping prediction latency low are paramount to keeping the hallway feeling fresh and interesting.
Complexities of ranking live rooms
Capturing changes to rooms in real time
Many of the features used by the ranking model are slow-moving batch features, which is to say that they are computed relatively infrequently, maybe once every few hours or once a day. But live rooms on Clubhouse are ever-evolving; a room might be winding down with people saying their goodbyes and heading out. Or a room might be blowing up because Oprah just dropped in. Slow-moving features don't capture these changes, so we feed fast-moving streaming features into the model to keep it abreast of the latest changes to rooms when ranking them. We've built a simple but powerful framework to compute streaming features from individual events that fire every time there is a change to a room. The framework lets us aggregate events into interesting features like the rates of change in room sizes, the total number of rooms you've seen in the last 10 minutes, or the number of links that have been pinned in a room. These fast features — as we call them — are logged at inference time, so that we can use historical values to train future iterations of the model.
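A toy version of such a framework might maintain sliding-window aggregates over room events. The class and feature names below are hypothetical, not our actual framework:

```python
from collections import deque

class FastFeature:
    """Minimal sketch of a streaming feature: a sliding-window
    aggregate over timestamped events."""
    def __init__(self, window_secs=600):
        self.window_secs = window_secs
        self.events = deque()          # (timestamp, value) pairs

    def update(self, ts, value=1.0):
        self.events.append((ts, value))
        self._evict(ts)

    def _evict(self, now):
        # Drop events that have aged out of the window.
        while self.events and now - self.events[0][0] > self.window_secs:
            self.events.popleft()

    def count(self, now):
        self._evict(now)
        return len(self.events)

    def rate_per_min(self, now):
        # e.g. a rate-of-change signal, such as room joins per minute
        return self.count(now) / (self.window_secs / 60)

# e.g. "rooms seen in the last 10 minutes", fed by impression events
rooms_seen = FastFeature(window_secs=600)
for ts in [0, 30, 45, 500, 650]:       # event timestamps in seconds
    rooms_seen.update(ts)
```

At 650 seconds, only the events at 500 and 650 fall inside the 10-minute window, so the feature's value is 2.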
High-speed model inference
Every time you pull to refresh the hallway, our servers fetch and run model inference on hundreds of features across hundreds of rooms and send the results back to the iOS and Android apps. This can be a prohibitively resource-intensive and slow process, so we've taken some steps to make sure users get the snappiest experience possible. We use a simple memory-backed feature store so that fetching model features does not take too long. We've also spun up a lightweight, stateless microservice responsible solely for model inference. Our server fetches features and ships them to this service, then receives the model scores to be used for ranking. This lets us isolate resource-intensive model inference from the core server and scale it independently.
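Sketched in miniature, the serving path might look like this, with an in-process dictionary standing in for the memory-backed feature store and a score_rooms() call standing in for the RPC to the inference microservice. All names here are hypothetical:

```python
# In-memory feature store: room_id -> feature dict, kept hot in process
# memory so a hallway refresh avoids a database round trip per room.
FEATURE_STORE = {}

def put_features(room_id, features):
    FEATURE_STORE[room_id] = features

def fetch_features(room_ids):
    return {rid: FEATURE_STORE[rid] for rid in room_ids}

def score_rooms(feature_batch, model):
    # Stands in for the call to the stateless inference service:
    # the server ships features over and gets model scores back.
    return {rid: model(f) for rid, f in feature_batch.items()}

def rank_hallway(room_ids, model):
    feats = fetch_features(room_ids)
    scores = score_rooms(feats, model)
    return sorted(room_ids, key=lambda r: scores[r], reverse=True)

# Toy stand-in model: score grows with friends present and topic match.
toy_model = lambda f: 0.8 * f["friends_in_room"] + 0.2 * f["topic_match"]
put_features("roomA", {"friends_in_room": 3, "topic_match": 0.4})
put_features("roomB", {"friends_in_room": 0, "topic_match": 0.9})
hallway = rank_hallway(["roomA", "roomB"], toy_model)
```

Keeping score_rooms() behind its own service boundary is what lets the inference tier scale independently of the core server.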
We're always experimenting with ways to make ranking even better. Incorporating user and channel embeddings and audio transcripts, training larger models, and improving our machine learning infrastructure are some of the exciting next steps we're thinking about. Thanks for reading, and we hope to see you around the hallway!
If you enjoyed reading this article, please join us on Clubhouse on Wednesday, May 11 at 2:00 PM PT for a live conversation about how we enhanced our hallways using machine learning. RSVP here today! P.S. If you're someone who enjoys solving tough problems, come join our team by checking out our job openings today!
— Akshaan Kakar (Software Engineer, Machine Learning & Discovery)
This post is part of our engineering blog series, Technically Speaking. If you liked this post and want to read more, click here.