For search engines and ads, optimization is everything. To achieve it, algorithms scan multiple data streams—user profiles, client ad budgets, and demographic targets—to decide which ads to place where. Heavy on math, modern advertising raises questions the hit show Mad Men never considered.
“Do you show the ad of the company willing to pay the most? Or the ad pitched toward a niche audience?” said Shipra Agrawal, an assistant professor of industrial engineering and operations research at Columbia Engineering and a member of the Data Science Institute. “How do you design an algorithm that learns through trial and error to maximize an ad seller’s profitability?”
Agrawal is an expert in optimization, the art of making decisions that maximize profit, minimize waste, or advance some other objective. It turns out that the same techniques that make online advertising profitable can also be applied to identifying promising drugs in clinical trials, minimizing energy waste, recommending movies, pricing goods, and guiding robots, among other applications.
Agrawal also works on designing and organizing exchange-traded “prediction markets,” an alternative to opinion polls that encourages participants to bet on the outcome of elections or other events according to their true beliefs. Participants who bet correctly receive a reward. “The equilibrium prices of bets in a prediction market reveal the crowd’s belief about the likelihood of an event,” she said.
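To see how prices can encode beliefs, here is a minimal sketch of one standard automated market maker, Hanson’s logarithmic market scoring rule (LMSR). This is an illustration of the general idea, not necessarily the mechanism Agrawal’s team built; the parameter `b` (the market’s liquidity) and the share counts are made-up inputs.

```python
import math

def lmsr_price(shares, outcome, b=100.0):
    """Instantaneous price of one outcome under the LMSR market maker.

    shares[i] is the number of shares sold so far on outcome i. The price
    is a softmax of shares / b, so all prices sum to 1 and can be read
    directly as the market's implied probability of each outcome.
    """
    weights = [math.exp(q / b) for q in shares]
    return weights[outcome] / sum(weights)
```

With no trades the price of either of two outcomes is 0.5; once traders have bought many more shares of outcome A than of outcome B, `lmsr_price` for A rises above 0.5, reflecting the crowd’s higher implied probability for A.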
With a team at Microsoft, Agrawal developed a prediction market to forecast the results of India’s 2014 general election. With 5,000 participants, the experiment’s predictions were consistent with most exit polls. She is currently analyzing the data to understand which factors improve prediction accuracy, especially in a diverse democracy like India.
Agrawal joined the Engineering School in July 2015 from Microsoft Research in Bangalore, where she worked in the company’s machine learning and optimization group.
At Microsoft, she came up with an explanation for why a popular algorithm developed in the 1930s, Thompson sampling, works. Fed streams of real-time data, the algorithm repeatedly picks the option most likely to yield the highest payoff, solving the so-called multi-armed bandit problem. Just as gamblers who pull an optimal combination of slot levers get the highest payout, ad sellers can maximize profits by matching the right mix of consumers with suggested web pages and ads. In a 2012 study, Agrawal and a colleague showed that Thompson sampling improves its decision making through trial and error while keeping its cumulative mistakes, or “regret,” close to the theoretical minimum.
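The mechanics behind this can be sketched in a few lines. The version below is a textbook Bernoulli-bandit variant of Thompson sampling (slot-machine arms that pay 0 or 1), not the exact setting of the 2012 paper; the arm payoff rates are invented for illustration.

```python
import random

def thompson_sampling(true_rates, rounds=5000, seed=0):
    """Bernoulli multi-armed bandit solved by Thompson sampling.

    Each arm keeps a Beta(successes + 1, failures + 1) posterior over its
    unknown payoff rate. Sampling one plausible rate per arm and playing
    the best-looking one naturally balances exploring uncertain arms
    against exploiting arms already known to pay well.
    """
    rng = random.Random(seed)
    wins = [0] * len(true_rates)
    losses = [0] * len(true_rates)
    total_reward = 0
    for _ in range(rounds):
        # Draw one plausible payoff rate per arm from its posterior...
        samples = [rng.betavariate(wins[i] + 1, losses[i] + 1)
                   for i in range(len(true_rates))]
        # ...play the arm whose sampled rate is highest and observe 0/1.
        arm = max(range(len(true_rates)), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_rates[arm] else 0
        wins[arm] += reward
        losses[arm] += 1 - reward
        total_reward += reward
    return total_reward, wins, losses
```

Early on the algorithm spreads its pulls across arms; as evidence accumulates, the posteriors sharpen and nearly all pulls concentrate on the best arm, which is why the cumulative regret stays small.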
In an extension of this work known as online programming, Agrawal is developing algorithms that make sequential decisions under uncertainty while respecting multiple long-term constraints. In a dynamic pricing system, for example, the demand for goods and their price may fluctuate, but sales are ultimately limited by supply. In a 2014 study in Operations Research, Agrawal and colleagues developed a learning-based algorithm that adjusts to a set of prices that are steadily updated. Beyond dynamic pricing, the research has applications in routing data across networks, ad-word matching, and inventory management.
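A toy version of the dynamic pricing example makes the setup concrete. This sketch simply reuses posterior sampling to learn the unknown demand at each candidate price and halts when supply runs out; it is an illustrative simplification, not the 2014 algorithm, which handles such long-term constraints far more carefully. The price list, demand probabilities, and supply level are all made-up inputs.

```python
import random

def dynamic_pricing(prices, demand_probs, supply, rounds, seed=0):
    """Toy learning-while-pricing loop under a supply constraint.

    Each round, the seller posts one price from `prices`; a sale occurs
    with probability demand_probs[i], which the seller does NOT know and
    must learn from observed sales. The seller samples a plausible demand
    rate per price from a Beta posterior and posts the price with the
    highest sampled expected revenue, stopping when inventory is gone.
    """
    rng = random.Random(seed)
    sales = [0] * len(prices)    # units sold at each price
    offers = [0] * len(prices)   # times each price was posted
    revenue, stock = 0.0, supply
    for _ in range(rounds):
        if stock == 0:
            break
        sampled = [p * rng.betavariate(sales[i] + 1, offers[i] - sales[i] + 1)
                   for i, p in enumerate(prices)]
        i = max(range(len(prices)), key=lambda j: sampled[j])
        offers[i] += 1
        if rng.random() < demand_probs[i]:
            sales[i] += 1
            revenue += prices[i]
            stock -= 1
    return revenue, stock
```

The interesting tension the real work addresses is visible even here: a low price sells reliably but exhausts supply cheaply, while a high price earns more per unit but sells rarely, and the algorithm must learn that trade-off while the inventory constraint is binding.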
For her PhD thesis at Stanford, Agrawal studied decision making when multiple uncertain parameters are involved. In a 2012 study in Operations Research, she developed a technique for measuring how far decisions made with incomplete information fall short of optimal, especially when some of the parameters depend on one another.
BE, M.B.M. Engineering College (2002); ME, Indian Institute of Science (2004); PhD, Stanford University (2011)