Push notifications are an essential gateway for Uber Eats customers to discover new restaurants, promotions, offers on groceries and alcohol, and the perks available to loyal users. These notifications are sent by various Marketing, Product, and City Operations teams, and their volume grew quickly, reaching billions of notifications per month by the end of 2020. This growth brought quality problems: notifications were sent outside of business hours, used duplicate links or invalid promo codes, directed users to closed businesses, and were delivered within minutes of one another with inconsistent information. Users also received pushes with little to no personalization of the type, timing, or frequency they preferred. In response, marketing teams introduced manual processes to control conflicting messages, consuming up to 15 hours per week per team member and displacing essential strategic work with orchestration chores.
Uber has always strived to provide the best user experience and set out to build a comprehensive approach for these push notifications. The Consumer Communication Gateway (CCG) was introduced: a centralized intelligence layer that controls the user-level relevance, order, timing, and frequency of push notifications. The system sits between incoming notifications and the user’s device; incoming pushes are buffered and saved in the user’s “inbox.” The goal is to compute the best schedule for sending these buffered pushes. To maximize a given objective, the system considers possible combinations of pushes and delivery times across a defined future time horizon, and produces schedules that deliver each push either zero or one time during the ensuing week.
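The buffering step can be pictured with a minimal sketch. The names (`Push`, `buffer_push`, `inbox`) are hypothetical, not Uber's actual API; the point is simply that incoming pushes are appended to a per-user inbox instead of being sent immediately.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Push:
    push_id: str
    content: str

# Per-user inbox: incoming pushes are buffered here rather than sent immediately.
inbox: dict = defaultdict(list)

def buffer_push(user_id: str, push: Push) -> None:
    """Save an incoming push into the recipient's inbox for later scheduling."""
    inbox[user_id].append(push)

buffer_push("user-42", Push("p1", "New sushi spot near you"))
buffer_push("user-42", Push("p2", "20% off groceries"))
print(len(inbox["user-42"]))  # 2 pushes now waiting to be scheduled
```

The scheduling logic described below then decides when (or whether) each buffered push leaves the inbox.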
The number of potential schedules grows factorially with N pushes and S slots, so it is impossible to evaluate each schedule separately. The problem is therefore formulated as an Assignment Problem: each push-to-time assignment has a score, and the chosen schedule maximizes the sum of the scores across assignments. This can be solved efficiently with an integer linear program solver. Business logic can be encoded as linear constraints in the formulation, such as the push send window, push expiration time, a minimum time gap between notifications, a daily frequency cap, and restaurant open hours; many other constraints can also be expressed as linear inequalities. The optimization framework then identifies the optimal pairs from the set of candidate push notifications and the set of possible delivery times. Using a linear program solver has advantages over greedier methods. It prioritizes a push that is about to expire, even when other pushes in the inbox appear more valuable. It can exploit the varying performance that pushes are expected to produce at different times: if Push A is expected to perform well at lunch and dinner but Push B only at lunch, it sends B at lunch and A at dinner, extracting value from both. And when the inbox size exceeds the frequency cap, the most valuable pushes are given delivery times while the less valuable ones are dropped.
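The Push A / Push B scenario can be sketched with SciPy's `linear_sum_assignment`, which solves the plain assignment problem (without the extra business constraints Uber encodes in its full ILP). The score matrix here is made up for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical predicted scores: rows = candidate pushes, columns = time slots
# (slot 0 = lunch, slot 1 = afternoon, slot 2 = dinner).
scores = np.array([
    [0.30, 0.10, 0.25],   # Push A: strong at lunch and dinner
    [0.28, 0.05, 0.02],   # Push B: only strong at lunch
])

# linear_sum_assignment minimizes total cost, so negate the scores to maximize.
rows, cols = linear_sum_assignment(-scores)
for push, slot in zip(rows, cols):
    print(f"push {push} -> slot {slot} (score {scores[push, slot]:.2f})")
```

Even though Push A scores highest at lunch, the solver assigns B to lunch and A to dinner, because that maximizes the total (0.28 + 0.25 beats 0.30 + 0.05), which is exactly the behavior described above.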
The value of a (push, time) pair is determined by a machine learning model that predicts the likelihood that a user will place an order within 24 hours of receiving the push. Specifically, an XGBoost model has been trained on historical data. Because of the high class imbalance in the dataset (only a small fraction of pushes are associated with an order), the negative class is downsampled for model training, and the least important features are pruned to build the final model.
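The downsampling step looks roughly like the sketch below, on synthetic data. The imbalance ratio and target ratio are illustrative assumptions, and the actual XGBoost training is omitted; only the negative-class sampling is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: roughly 2% positives (push followed by an order).
n = 10_000
y = (rng.random(n) < 0.02).astype(int)
X = rng.normal(size=(n, 5))

# Downsample the negative class to a 1:4 positive:negative ratio (assumed here).
pos_idx = np.flatnonzero(y == 1)
neg_idx = np.flatnonzero(y == 0)
keep_neg = rng.choice(neg_idx, size=min(len(neg_idx), 4 * len(pos_idx)), replace=False)
keep = np.concatenate([pos_idx, keep_neg])
X_bal, y_bal = X[keep], y[keep]

print(f"positive rate before: {y.mean():.3f}, after downsampling: {y_bal.mean():.3f}")
```

All positives are kept and only negatives are discarded, so the model still sees every order-generating push during training.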
The system has been implemented with four components at a high level, each with its own distinct responsibilities: the Persistor, the Schedule Generator, the Scheduler, and Push Delivery.
The Persistor:
The Persistor serves as the system’s entry point and receives pushes meant for users over gRPC. The push content is stored along with its metadata in the inbox, backed by MySQL. The inbox table is partitioned by the recipient’s user UUID, which enables horizontal scaling with few hotspots and co-locates multiple pushes intended for the same user.
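A UUID-keyed partition scheme can be sketched as below. The shard count and hashing choice are assumptions for illustration, not Uber's actual configuration; the property that matters is that the mapping is stable, so every push for a given user lands on the same partition.

```python
import hashlib
import uuid

NUM_PARTITIONS = 64  # hypothetical shard count

def partition_for(user_uuid: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a user UUID to a stable partition so all of a user's pushes co-locate."""
    digest = hashlib.sha256(user_uuid.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions

u = str(uuid.uuid5(uuid.NAMESPACE_DNS, "example-user"))
# The same user always maps to the same partition:
assert partition_for(u) == partition_for(u)
print(partition_for(u))
```

Because user UUIDs are effectively random, hashing them spreads users evenly across partitions, which is what keeps hotspots rare.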
The Schedule Generator:
The Schedule Generator uses Uber’s ML platform paired with a linear program solver. It is triggered each time a push is saved into a user’s inbox, and it fetches all pushes buffered for that user, even those already scheduled. This allows it to reschedule earlier pushes while taking the newest push into account.
The Scheduler:
Once the Schedule Generator has found a schedule, it contacts the Scheduler for each push-time assignment. The Scheduler must provide a distributed cron-like service with a throughput of up to tens of thousands of triggers per second, which is implemented using Cadence. The Scheduler is also idempotent, so a push can safely be rescheduled when needed.
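The two properties named here, idempotency and safe rescheduling, can be illustrated with a toy in-memory stand-in (Cadence itself provides these guarantees in the real system; the class and method names below are invented for the sketch).

```python
from dataclasses import dataclass, field

@dataclass
class IdempotentScheduler:
    """Toy stand-in for the Cadence-backed scheduler: firing the same
    (push_id, slot) trigger twice delivers only once, and rescheduling
    a push invalidates its earlier trigger."""
    scheduled: dict = field(default_factory=dict)   # push_id -> current slot
    delivered: set = field(default_factory=set)     # push_ids already sent

    def schedule(self, push_id: str, slot: str) -> None:
        self.scheduled[push_id] = slot  # rescheduling overwrites the old slot

    def fire(self, push_id: str, slot: str) -> bool:
        # Ignore stale triggers (slot no longer current) and duplicates.
        if self.scheduled.get(push_id) != slot or push_id in self.delivered:
            return False
        self.delivered.add(push_id)
        return True

s = IdempotentScheduler()
s.schedule("push-1", "12:00")
s.schedule("push-1", "18:00")          # rescheduled to dinner
print(s.fire("push-1", "12:00"))       # stale lunch trigger: False
print(s.fire("push-1", "18:00"))       # current trigger: True, push delivered
print(s.fire("push-1", "18:00"))       # duplicate trigger: False
```

The stale-trigger check is what makes rescheduling safe: an old trigger that still fires after a reschedule is simply ignored.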
The Push Delivery:
The Push Delivery component starts when the Scheduler determines that a planned push is ready to send. It is in charge of several last-mile verifications, such as checking whether Uber currently has enough delivery drivers, and it also smooths out retries and load.
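A last-mile check gate might look like the sketch below. The specific checks and field names are assumptions drawn from the constraints mentioned earlier (expiration, courier supply, restaurant hours), not Uber's actual implementation.

```python
def last_mile_checks(push: dict, now_hour: int, couriers_available: bool):
    """Hypothetical pre-send verifications run just before delivery.
    Returns (ok, reason); the push is dropped or retried when ok is False."""
    if push["expires_hour"] <= now_hour:
        return False, "expired"
    if not couriers_available:
        return False, "no couriers available"
    if not (push["open_hour"] <= now_hour < push["close_hour"]):
        return False, "restaurant closed"
    return True, "ok"

push = {"expires_hour": 22, "open_hour": 11, "close_hour": 21}
print(last_mile_checks(push, now_hour=12, couriers_available=True))   # (True, 'ok')
print(last_mile_checks(push, now_hour=23, couriers_available=True))   # (False, 'expired')
```

Running these checks at send time, rather than at scheduling time, catches conditions that changed between the two, such as a restaurant closing early or courier supply drying up.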
The results from initial experiments have been highly positive: a reduction in opt-outs and a substantial increase in notification relevance have been observed. Engineers continue to work on boosting the impact of messaging intelligence, including improving the core models, expanding across channels, and expanding across the platform.
Check out the reference article. All credit for this research goes to the researchers on this project.
Avanthy Yeluri is a Dual Degree student at IIT Kharagpur. She has a strong interest in Data Science because of its numerous applications across a variety of industries, as well as its cutting-edge technological advancements and how they are employed in daily life.