People of ACM - Michael Wooldridge
October 6, 2020
Will you tell us what multi-agent systems are and why they are important in advancing AI?
The idea of software agents emerged in the early 1990s. The aim was to have software systems that are not simply the dumb recipients of our instructions, dutifully doing what they are told but otherwise remaining passive, but are instead active assistants, cooperating with us and working on our behalf in much the same way that a human assistant would. This vision of software agents took some time to make its way to reality, but eventually it did, and most of us now carry around a manifestation of that dream in the form of personal assistants on our smartphones, such as Siri, Alexa, and Cortana.
While software agents are now an everyday reality, there is one piece of the software agent dream that has not yet turned out the way many of us expected. We imagined that the software agents working on our behalf would interact directly with each other. If I want a meeting with you, why should my Siri call you personally? Why doesn't my Siri contact your Siri directly? That is the dream of multi-agent systems: AI systems that can interact with each other in pursuit of their owners' goals, employing social skills such as cooperation, coordination, and negotiation along the way. I remain convinced that this dream must be a central part of the future of AI.
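To make that dream concrete, here is a minimal sketch of two scheduling assistants agreeing on a meeting slot through a simple alternating-offers protocol. Everything in it (the Agent class, the concession rule, the slots) is my own hypothetical illustration, not the API of any real assistant:

```python
# Hypothetical sketch: two scheduling agents negotiate a meeting slot
# via alternating offers. Names and the concession rule are invented
# for illustration; no real assistant exposes this interface.

class Agent:
    def __init__(self, name, preferences):
        self.name = name
        self.preferences = preferences  # slots, most to least preferred
        self._next = 0                  # index of our next proposal

    def propose(self):
        # Offer our most-preferred slot that we haven't proposed yet.
        slot = self.preferences[self._next]
        self._next += 1
        return slot

    def accepts(self, slot):
        # Concede gradually: accept anything we like at least as much
        # as the slot we would otherwise propose next.
        return self.preferences.index(slot) <= self._next

def negotiate(a, b, max_rounds=10):
    proposer, responder = a, b
    for _ in range(max_rounds):
        offer = proposer.propose()
        if responder.accepts(offer):
            return offer
        proposer, responder = responder, proposer  # swap roles
    return None  # no deal within the round limit

mine = Agent("my assistant", ["Tue 10:00", "Mon 14:00", "Mon 9:00", "Wed 16:00"])
yours = Agent("your assistant", ["Mon 14:00", "Wed 16:00", "Tue 10:00", "Mon 9:00"])
print(negotiate(mine, yours))  # -> Mon 14:00
```

Toy as it is, the sketch captures the essential point: each agent pursues its owner's preferences, yet agreement emerges from the interaction rather than from either side simply obeying instructions.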
Your recent books, Artificial Intelligence: Everything you need to know about the coming AI and The Road to Conscious Machines, are written as introductions to AI for general audiences. You have argued that popular misconceptions about AI have taken hold and distracted the public from the enormous potential of the field. Will you discuss why it is important that we reframe the AI narrative?
The advances we’ve witnessed in AI this century are real and tremendously exciting: systems like usable automated translation tools, while not perfect, are successfully used by millions of people every day. Such tools still seemed very remote at the turn of the century. Unfortunately, it is natural for someone outside the field, hearing about “superhuman” ability in domains such as playing Go, to conclude that we must be on the edge of some kind of machine intelligence explosion: after all, if machines can play games like Go and chess better than any human, then surely “true” AI (the “Hollywood dream”) must be very close. The popular narrative around AI is often fixated on this idea, and as a consequence AI is often painfully overhyped in the press and on social media. I’m concerned about this for several reasons.
First, I think it paints a wildly unrealistic picture of where AI really is. Anybody expecting robot butlers in the near future, for example, is going to be sorely disappointed. Hype has been very damaging for the field in the past, so I wanted to paint a realistic picture of what AI is, and what is possible.
Second, the Hollywood dream of AI distracts us from the issues surrounding AI that we should be concerned about now: algorithmic bias; the creation and dissemination of fake news; alienation in labor, where AI systems are used to monitor and direct an employee’s every action; and many more. So my book tries to reframe the AI narrative, away from Hollywood dreams (and nightmares) and toward these issues.
You were the principal investigator for the Reasoning about Computational Economies (RACE) project, which received a five-year grant from the European Research Council (ERC). What were the key goals of this project?
The RACE project was centered around understanding multi-agent systems from the perspective of game theory—the theory of strategic reasoning. If my software agent is trying to do the best for me, and your software agent is trying to do the best for you, then when they interact with one another, this interaction is strategic, and game theory can help us to analyze it. In the RACE project, we developed the theory, algorithms, and software to enable us to automate the game theoretic analysis of multi-agent systems.
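To give a flavor of what automating a game-theoretic analysis can look like, here is a minimal sketch (my own toy example, not the RACE project's actual software) that exhaustively checks every action pair of a two-player game for pure-strategy Nash equilibria:

```python
# Toy illustration: find the pure-strategy Nash equilibria of a
# two-player game given as a pair of payoff matrices.

def pure_nash_equilibria(payoff_row, payoff_col):
    """Return the (i, j) action pairs from which neither player can
    gain by unilaterally deviating. payoff_row[i][j] and
    payoff_col[i][j] are the two players' payoffs at actions (i, j)."""
    n_rows, n_cols = len(payoff_row), len(payoff_row[0])
    equilibria = []
    for i in range(n_rows):
        for j in range(n_cols):
            # Is i a best response for the row player against j?
            row_best = all(payoff_row[i][j] >= payoff_row[k][j]
                           for k in range(n_rows))
            # Is j a best response for the column player against i?
            col_best = all(payoff_col[i][j] >= payoff_col[i][l]
                           for l in range(n_cols))
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# The Prisoner's Dilemma: action 0 = cooperate, action 1 = defect.
row_payoffs = [[3, 0], [5, 1]]
col_payoffs = [[3, 5], [0, 1]]
print(pure_nash_equilibria(row_payoffs, col_payoffs))  # [(1, 1)]: mutual defection
```

Real multi-agent systems of course involve mixed strategies, repeated interactions, and enormous strategy spaces, which is why dedicated theory, algorithms, and software are needed.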
Why did you and your fellow organizers believe it was important to launch the ACM ICAIF conference to explore the intersection of AI and finance?
Finance is a natural and important application area for AI, in part because it is a data-rich domain. And what contemporary AI needs, more than anything, is data to feed the machine learning algorithms behind the current AI boom.
My personal interest in AI in finance derives from the fact that the world’s trading markets are nowadays driven by software agents: so-called high-frequency trading systems, which buy and sell without human intervention. But there are many situations in which high-frequency trading systems have been shown to have unpredictable and undesirable dynamics: the “Flash Crash” of May 6, 2010 is a famous example, in which automated trading markets lost billions of dollars in just a few minutes. If computer programs are going to trade autonomously on our behalf, then we need to understand and manage their dynamics, and know how to intervene safely when necessary. A lot of my current research is around these issues.
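To see how such feedback dynamics can arise, here is a deliberately crude sketch (my own illustration with invented parameters, not a model of the actual Flash Crash) in which identical momentum traders all sell on a downtick, so a single small shock turns into a sustained slide:

```python
# Toy feedback loop: fifty identical momentum traders buy on an uptick
# and sell on a downtick. All parameters are invented for illustration.

n_traders = 50
impact = 0.02        # price move per unit of net order flow (made up)
prices = [100.0, 100.0]

for step in range(15):
    trend = prices[-1] - prices[-2]
    shock = -1.0 if step == 0 else 0.0   # one small external sell-off
    # Momentum rule: follow the last price move, sit out if flat.
    direction = 0 if trend == 0 else (1 if trend > 0 else -1)
    net_orders = n_traders * direction
    prices.append(prices[-1] + impact * net_orders + shock)

print([round(p, 1) for p in prices])
# One tiny shock, then a steady slide: each agent's reaction to the
# falling price is precisely what keeps the price falling.
```

The point of the toy is not realism but the shape of the problem: every agent behaves sensibly in isolation, yet the system as a whole does something nobody intended.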
In a recent talk, “We Are NOT the Architects of Our Own Demise,” you predicted that autonomous vehicles will be widespread within the next few decades. What challenges will we need to surmount in multi-agent systems, or in AI more broadly, to make autonomous vehicles part of everyday life?
Driverless car technology is tantalizingly close, but “level 5” autonomy, meaning vehicles without a steering wheel whose human occupants never intervene, is still a way off. There seem to be two issues. The first is that while we can use AI to train vehicles to deal with most eventualities, there is a very long tail of possible circumstances a vehicle might find itself in that we just can’t anticipate. A human driver can call on common sense and experience of the world when encountering these; a driverless car can’t. The second is that driverless cars have to operate on roads alongside cars driven by people. That is not easy for the car (humans are often unpredictable and may not obey the rules of the road), nor for the human drivers around it. These human elements are hard to deal with.
My guess is that we will start to see driverless technology rolled out in a growing number of “constrained” areas (ports, military installations, factories), and perhaps low-speed autonomous taxis in certain urban areas, running standard routes such as airport to city center. At the same time, AI-powered cruise-control systems like Tesla’s Autopilot will get better and better. I’m optimistic that we’ll get to level 4 autonomy relatively soon, but level 5 is a much bigger challenge.
Michael Wooldridge is a Professor and Head of the Department of Computer Science at the University of Oxford, and a Program Director for AI at the Alan Turing Institute in London. His research focuses primarily on multi-agent systems, and his work draws on ideas from logic, computational complexity, game theory, and agent-based modeling. He has authored more than 400 scientific publications and seven technical books, as well as two popular science introductions to AI: Artificial Intelligence: Everything you need to know about the coming AI. A Ladybird Expert Book (2018) and The Road to Conscious Machines (2020).
Wooldridge has been active in service to the field: he was elected President of the International Joint Conference on Artificial Intelligence (IJCAI) in 2015, President of the European Association for AI (EurAI) in 2014, and President of the International Foundation for Autonomous Agents and Multi-Agent Systems (IFAAMAS) in 2007. He currently serves on the steering committee for the inaugural ACM International Conference on AI in Finance (ICAIF), which will be held October 14 to 16. Among his many honors, Wooldridge received the ACM/SIGAI Autonomous Agents Research Award (2006) and was named an ACM Fellow (2015) for contributions to multi-agent systems and the formalization of rational action in multi-agent environments.