People of ACM - Luiz André Barroso
August 11, 2020
How did you first become interested in computer architecture?
As an undergraduate in Electrical Engineering, I fell in love with queueing theory while working with Daniel Menascé at the Pontifical Catholic University of Rio de Janeiro. I wrote my Master’s thesis on that topic, developing models and simulators for multiprocessor interconnection networks. I realized I was fascinated by understanding and optimizing complex systems, and computer architecture gives you permission to do that for a living.
What factors converged in the early 2000s that required computer architects to fundamentally rethink datacenter design?
Mainly two things. First, by 2003 the minimum size of a single Google serving cluster was becoming larger than the full size of the co-location facilities we used to host our equipment. It was clear we would need to begin building datacenters of our own. At that point it became evident how much room for improvement there was in that area. In the third-party hosting business, datacenters were designed by groups of disjoint engineering crafts that knew little of each other’s disciplines. Civil engineers built the building, mechanical engineers provisioned cooling, electrical engineers distributed power, hardware designers built servers, and software engineers wrote internet services. None of those folks ever really talked to one another very much. In a way it was not a surprise that the end product was generally quite poor.
Second, once you realize that internet services like Google Search and Gmail don’t run in a few racks of machines, but run on a whole building’s worth of equipment, you might as well think of the datacenter as the computer you are designing—all of it.
How did it then evolve from an internal project into something with a broader impact on the industry?
We began publishing research papers about our design philosophy. The motivation was largely sustainability. People used to think that it would be prohibitively expensive to design datacenters that were very energy-efficient. We knew that was not the case, and we wanted to describe our designs as evidence of that. Our research publications, and eventually the textbook, were the means by which we got the message out.
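Energy efficiency of this kind is conventionally quantified with PUE (power usage effectiveness), the ratio of total facility power to the power consumed by the IT equipment itself. As a rough worked example, with illustrative numbers rather than Google’s actual figures:

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
# The numbers below are illustrative only.
it_equipment_kw = 1200.0    # servers, storage, and networking gear
total_facility_kw = 1320.0  # IT load plus cooling, power conversion, lighting

pue = total_facility_kw / it_equipment_kw
print(f"PUE = {pue:.2f}")   # 1.10; an ideal facility would approach 1.0
```

Conventional facilities of that era often ran at a PUE of around 2.0, meaning roughly a watt of overhead for every watt of actual computing.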
One of your more recent projects combines localization and artificial intelligence to improve Google Maps. When someone is on foot and using Google Maps on their smartphone, Google Maps may give them a command to “turn north,” but the pedestrian may not know which direction north is. Will you tell us how Google Maps’ Live View feature aims to solve this challenge?
I managed a really bold team of engineers, product specialists and designers in Google Maps who came up with the idea that we could use our own very well localized StreetView images to significantly improve estimates of your phone’s location as well as which direction you are facing. Turns out that the GPS and compass on a cell phone are too inaccurate for the kind of walking navigation use case you mention. The idea was to use the phone’s camera to capture features of the scene in front of you and compare them with the StreetView images we had for that same place, effectively aligning the two. I confess I wasn’t really confident it was going to work, but I gave the team a chance to prove me wrong and they did.
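The interview doesn’t spell out how the alignment works, and Google’s actual visual positioning system is far more sophisticated, but a minimal sketch of the general technique (matching local image features between a camera frame and a reference image) can be written with OpenCV. The file names below are hypothetical:

```python
import cv2
import numpy as np

# Hypothetical inputs: a frame from the phone camera and a StreetView-style
# reference image whose capture location and orientation are already known.
camera = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("streetview_reference.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe distinctive local features in both images.
orb = cv2.ORB_create(nfeatures=2000)
kp_cam, des_cam = orb.detectAndCompute(camera, None)
kp_ref, des_ref = orb.detectAndCompute(reference, None)

# Match descriptors and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_cam, des_ref), key=lambda m: m.distance)[:100]

# Robustly estimate a homography relating the two views; a real system would
# instead recover a full camera pose to refine position and heading.
src = np.float32([kp_cam[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```

Because the reference image’s pose is already known, aligning the live frame against it pins down where the phone is and which way it is pointing far more tightly than GPS and a magnetometer can.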
In a recent article for Communications of the ACM, ACM A.M. Turing Award recipients John Hennessy and David Patterson predicted another golden age for the field of computer architecture in the next decade. Do you agree? What are some interesting developments on the horizon for computer architecture?
I wholeheartedly agree. In many ways the end of Dennard scaling (the technical phenomenon behind the more popular Moore’s Law) shines a light on the work of computer architects, since exponential improvements at the circuit level are no longer on our roadmap. A given circuit technology can now last several years, which makes it easier to reuse designs (you don’t need to port them to the next circuit technology every year), so hardware finally has a chance to benefit from some of the efficiencies that have long been available in software engineering processes. I’m fascinated to see what new ideas could be cast into specialized hardware over the next few years, beyond the already successful neural network accelerators such as Google’s Tensor Processing Unit (TPU).
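As context for why specialization pays off: the published TPU designs are built around a systolic array that performs low-precision matrix multiplies with wider accumulation. The NumPy sketch below mimics only that arithmetic pattern, as an illustration rather than anything resembling Google’s actual hardware or code:

```python
import numpy as np

# Toy model of the arithmetic a TPU-style accelerator specializes in:
# 8-bit matrix multiply with 32-bit accumulation to avoid overflow.
def quantized_matmul(a_int8: np.ndarray, b_int8: np.ndarray) -> np.ndarray:
    return a_int8.astype(np.int32) @ b_int8.astype(np.int32)

rng = np.random.default_rng(0)
a = rng.integers(-128, 128, size=(256, 256), dtype=np.int8)
b = rng.integers(-128, 128, size=(256, 256), dtype=np.int8)
print(quantized_matmul(a, b).dtype)  # int32
```

Hard-wiring exactly this one operation, rather than running it through a general-purpose instruction pipeline, is what buys such accelerators their large efficiency gains.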
Luiz André Barroso is Vice President of Engineering at Google in the Core organization, the team primarily responsible for the technical foundation behind Google’s flagship products. Previously, he was the technical lead for Google’s computing infrastructure, and served as VP of Engineering for Google Maps. His research interests include hardware design (from the chip to the datacenter level), high-performance input/output, and web services infrastructure.
Barroso is widely recognized as the foremost architect of modern ultra-scale datacenters. He has authored numerous influential publications on topics including warehouse-scale computing, hyperscale system architecture, and energy efficiency. His book, The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines (co-authored with Urs Hölzle and Parthasarathy Ranganathan), is widely regarded as the authoritative textbook in the field. Barroso received the 2020 ACM-IEEE CS Eckert-Mauchly Award for pioneering the design of warehouse-scale computing and driving it from concept to industry.