People of ACM - David Blei
September 9, 2014
David Blei is an associate professor at Princeton University. He has written extensively on topic modeling and pursues new statistical tools for discovering and exploiting the hidden patterns that pervade modern, real-world data sets. He is joining Columbia University this fall as a Professor of Statistics and Computer Science, and will become a member of Columbia's Institute for Data Sciences and Engineering.
The recipient of the 2013 ACM-Infosys Foundation Award in the Computing Sciences, Blei received an NSF CAREER Award, an Alfred P. Sloan Fellowship, and the NSF Presidential Early Career Award for Scientists and Engineers. His recognitions also include the Office of Naval Research Young Investigator Award and the New York Academy of Sciences Blavatnik Award for Young Scientists. He earned a B.Sc. degree in Computer Science and Mathematics from Brown University, and a Ph.D. degree in Computer Science from the University of California, Berkeley.
How can your research in machine learning be applied to commercial applications?
Our research applies readily to commercial problems. For example, my group has recently been focusing on recommendation systems for scientific articles. These ideas can be adapted to recommending other kinds of items as well, especially items that come with associated text. More broadly, a number of companies have been using our methods, especially topic modeling, to solve a variety of problems; examples that come to mind include risk prediction and search.
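As one hedged illustration of the kind of adaptation described here (a sketch, not Blei's own system), the topic proportions that a topic model infers for each document can serve as features for text-based recommendation. The toy corpus and the scikit-learn settings below are assumptions for demonstration only.

```python
# A minimal sketch: fit LDA to a toy corpus and use per-document topic
# proportions as features for recommending similar articles.
# The corpus and all settings are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "stochastic optimization scales machine learning to massive data",
    "topic models uncover hidden thematic structure in document collections",
    "variational inference approximates posterior distributions efficiently",
    "gradient methods climb an objective by following its slope",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(docs)  # bag-of-words counts, LDA's input

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)  # per-document topic proportions

# Recommend the article whose topic proportions are most similar to doc 0.
sims = theta @ theta.T
sims[0, 0] = -1.0  # exclude the query article itself
print("most similar to doc 0:", int(np.argmax(sims[0])))
```

In a real recommender, the topic proportions would typically be combined with user history rather than compared document-to-document, but the sketch shows why items "associated with texts" are a natural fit for these methods.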
What is the most important recent innovation in machine learning?
One of the main recent innovations in ML research has been that we (the ML community) can now scale up our algorithms to massive data, and I think that this has fueled the modern renaissance of ML ideas in industry.
The main idea is called stochastic optimization, an adaptation of an old algorithm, the Robbins-Monro stochastic approximation method, that statisticians invented in the 1950s.
In short, many machine learning problems boil down to finding parameters that maximize (or minimize) a function. A common way to do this is "gradient ascent": iteratively following the steepest direction to climb a function to its top. This technique requires repeatedly calculating the steepest direction, and the problem is that this calculation can be expensive, because it typically touches every data point. Stochastic optimization lets us instead follow cheap, noisy estimates of that direction computed from small random subsets of the data. It has transformed modern machine learning.
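To make the contrast concrete, here is a minimal sketch (not from the interview) comparing a full gradient with a stochastic estimate on a least-squares problem, stated as minimization, the mirror image of the ascent described above. The data, batch size, and step-size schedule are illustrative assumptions.

```python
# Minimal sketch: full-batch vs. stochastic gradient steps for minimizing
# the average squared error f(w) = (1/N) * sum_i (x_i * w - y_i)^2.
# Data, batch size, and step sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
x = rng.normal(size=N)
y = 3.0 * x + rng.normal(scale=0.1, size=N)  # true slope is 3.0

def full_gradient(w):
    # Touches all N points: accurate but expensive on massive data.
    return 2.0 * np.mean(x * (x * w - y))

def stochastic_gradient(w, batch=32):
    # Touches only a small random sample: a cheap, unbiased estimate.
    i = rng.integers(0, N, size=batch)
    return 2.0 * np.mean(x[i] * (x[i] * w - y[i]))

w = 0.0
for t in range(1, 501):
    step = 0.1 / t**0.6  # decreasing step sizes, as Robbins-Monro requires
    w -= step * stochastic_gradient(w)

print(f"estimated slope: {w:.3f}")          # close to 3.0
print(f"full gradient here: {full_gradient(w):+.4f}")  # near zero at the optimum
```

Each stochastic step costs a 32-point computation instead of a 100,000-point one, which is the scaling gain described above; the decreasing step sizes average away the noise in the estimates.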
What's your favorite aspect of your work?
I like working on interesting applied problems about data—in the sciences, social sciences, and humanities—and letting these real-world problems drive the methodological advances that we develop. This involves finding and collaborating with scientists from many disciplines, and learning something each time.
What's in the future for machine learning? What are some breakthroughs that you expect in the field?
I think that one of the next breakthroughs in modern machine learning and statistics will involve observational data. Previously, machine learning focused on prediction; observational data was difficult to understand, and exploring it was a fuzzily defined task. Now, data sets are much larger ("big data"), and practitioners need to be able to quickly explore, understand, visualize, and summarize them. Revisiting observational data in the context of larger data sets (and this new, pressing need) will lead to new methods and new theory about what we can say about observational data.
As a distinguished innovator of technology that is changing how we live and work, what advice would you give to young people considering careers in computing?
Computing is a wonderful field. There are important problems wherever you look, and many opportunities in technology, government, academia, and the sciences. My first piece of advice is to work on problems that you find interesting. (I also advise not to pay too much attention to advice, especially from "distinguished" people.)