People of ACM - Ayanna Howard
June 22, 2021
How did you first become interested in robotics and AI?
I first became interested in robotics as a young, impressionable middle school girl. My motivation was the television series The Bionic Woman—my goal in life, at that time, was to gain the skills necessary to build the bionic woman. As a teenager, I didn’t realize you couldn’t actually do that at the time, but I figured I had to acquire a combined skill set in engineering and computer science in order to accomplish that goal. I became fascinated with AI after my junior year in college, when I was required to design my first neural network during my third NASA summer internship in 1992. I quickly saw that if I could combine the power of AI with robotics, I could enable the ambitious dreams of my youth.
When I first started my college journey, robotics degrees didn’t exist as such. For a 16-year-old high school senior first applying to college and trying to decide on a major, it seemed that if you wanted to do robotics, you went into engineering. And, even though I’d declared my undergraduate major as computer engineering because of that, by the time I matriculated into graduate school, robotics had started to evolve. The faculty that taught my undergraduate robotics and computer vision courses were electrical/computer engineering professors, but in graduate school, computer science had started to “claim” robotics as a subfield.
The human side of my research interest came about when I began taking courses such as case-based reasoning and fuzzy logic in graduate school. These early AI courses introduced me to designing algorithms that could encode data coming primarily from human experts—humans who didn’t necessarily know how to jot down their expertise in the coding language of machines.
What prompted you to found Zyrobotics? How do you see the field of assistive robotics developing in the near future?
I did not see myself as a startup founder. As a graduate student, I had developed and sold two different artificial intelligence software applications to established companies, but I was not the one pushing the product into the market. Years later, as a faculty member at Georgia Tech, I participated in the National Science Foundation’s Innovation Corps (I-Corps) program and was bitten by the entrepreneurship bug. Zyrobotics was first founded to commercialize assistive technology products for children with special needs based on technology licensed from my lab at Georgia Tech. It has since expanded into developing STEM educational products for children with diverse learning needs and now functions as a nonprofit organization. I’m most proud of this achievement because it allowed me to combine all of the hard-knock lessons I’d learned in designing artificial intelligence algorithms, adaptive user interfaces, and human-robot interaction schemes with a real-world application that had large societal impact—that of engaging children of diverse abilities in improving their developmental and educational outcomes.
When I first started working in assistive robotics, or more specifically in pediatric robotics, there were not a lot of robotics researchers working in this domain. Over the last 15 years, though, there has been an increase in the number of researchers who have started to focus on the infusion of robots into clinical and home settings to address the needs of individuals with disabilities. I’m hopeful that this trend will continue and that we will see more commercialized platforms made available within the community.
In one of your more cited papers, “Overtrust of Robots in Emergency Evacuation Scenarios,” you and your co-authors conducted an experiment demonstrating that participants followed the evacuation instructions of a robot, even though half of the participants had observed the same robot performing poorly in a navigation guidance task just minutes before. As AI systems become increasingly prevalent in our daily lives, how can we mitigate our inclination to overtrust these systems?
Based on ongoing research in my lab, we’ve continued to validate this human propensity to overtrust AI. The emergency evacuation overtrust study was one of the earliest from the group to validate the phenomenon, which emerges when humans interact with robotic systems that require them to make a quick decision under pressure. Since then, my group has made a number of other interesting findings, such as: 1) humans are more likely to rely on the AI if they need to make decisions concerning individuals who belong to a different demographic group (which links to our studies examining AI bias); 2) providing humans with more autonomy when asking for assistance further increases trust; and 3) a person’s positive or negative initial interactions with AI have a direct correlation with their ongoing trust in the system, irrespective of whether the AI is performing poorly or not. My research group has also continued to examine ways to mitigate this inclination to overtrust these systems. Although still preliminary, we’ve found that if we actively design distrust into the system, we can make it safer. One methodology we’re exploring is tangentially linked to the field of explainable AI, where the system provides an explanation with respect to some of its risks or uncertainties. Given that all of these systems have uncertainty, we’ve begun designing our AI system to provide information concerning its uncertainty in a way that the human can understand, both at an emotional and cerebral level, in order to change their trust behavior.
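To make the idea of surfacing uncertainty concrete, here is a minimal, hypothetical Python sketch (not drawn from the interview or from the lab’s actual systems; the function name, confidence bands, and wording are invented for illustration) of a recommendation that carries its own uncertainty cue instead of presenting a bare instruction:

```python
def recommend_with_uncertainty(action: str, confidence: float) -> str:
    """Wrap a system recommendation with a plain-language uncertainty cue.
    The confidence thresholds and phrasing below are illustrative placeholders."""
    if confidence >= 0.9:
        cue = "I am fairly confident about this route."
    elif confidence >= 0.6:
        cue = "I am somewhat unsure; please watch for posted exit signs as well."
    else:
        cue = "I am guessing here; rely on your own judgment and the posted exits."
    return f"{action} ({confidence:.0%} confidence). {cue}"

# Example: an evacuation-guidance robot exposing low confidence rather than hiding it.
print(recommend_with_uncertainty("Follow me to the east exit", 0.55))
```

The design choice this illustrates is that the system's uncertainty is communicated in plain language at the moment of decision, rather than being buried in a log or omitted, so the human can recalibrate their trust before acting.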
You were one of the first researchers to uncover inherent biases in algorithms. In designing AI systems, are there certain first steps researchers and practitioners in industry can take to ensure fairer algorithms?
I’m actually happy to say that my group published one of the earliest papers on recognizing inherent biases in algorithms. One of our papers on the biases found in emotion recognition algorithms, “Addressing Bias in Machine Learning Algorithms: A Pilot Study on Emotion Recognition for Intelligent Systems,” was first published in 2017. In it, we examined the biases found in a commercial cloud-based emotion recognition system and proposed a solution, built on top of that system, for improving the classification rate for the minority group while maintaining equivalent classification rates for the majority group. Since then, my research group has not only uncovered biases in AI algorithms but also proposed solutions to mitigate those biases. Some of the steps that can be taken to ensure fairer algorithms are societal and some are technical.
I believe that, as a field, we still need to tackle the general problem of underrepresentation in AI. As a community, we need to recognize that differences matter and that team diversity for designing solutions for a diverse world population is not just a nice-to-have, but a requisite. On the technical side, we need to design a form of AI accountability within our algorithms. We tend to be so focused on metrics such as accuracy, precision, recall, etc. that we disregard (or devalue) other metrics for quantifying algorithmic fairness, such as measuring disparate impact. As a community, we need to expand our metrics of performance to include fairness as a required metric rather than viewing it as a secondary post-processing criterion.
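As an illustration of what reporting fairness alongside the usual performance numbers might look like, here is a minimal, hypothetical Python sketch. The data, group labels, and the use of the common "four-fifths rule" threshold are assumptions for illustration only, not a method described in the interview:

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of positive-outcome rates between the two groups; values below
    roughly 0.8 are often flagged under the common 'four-fifths rule'."""
    rate_a = y_pred[group == "A"].mean()  # positive-prediction rate, group A
    rate_b = y_pred[group == "B"].mean()  # positive-prediction rate, group B
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def accuracy(y_true, y_pred):
    """Standard accuracy: fraction of predictions matching the labels."""
    return (y_true == y_pred).mean()

# Toy, fabricated example: predictions for two demographic groups.
y_true = np.array([1, 0, 1, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Report the fairness metric next to accuracy, not as an afterthought.
print(f"accuracy         = {accuracy(y_true, y_pred):.2f}")
print(f"disparate impact = {disparate_impact(y_pred, group):.2f}")
```

The point of the sketch is simply that a fairness measure such as disparate impact can sit in the same evaluation report as accuracy, precision, and recall, rather than being treated as a secondary post-processing check.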
What advice would you offer a student who may have an interest in robotics or computing, but may be discouraged because they don’t see anyone like them represented in their classes or in the field?
Sometimes when you’re the “one and only” in a space, you might also be the first from your home community to have successfully navigated into the robotics, AI, or computer science world. This means that you might not have a long list of individuals you feel comfortable reaching out to in order to help you navigate your career or to pump you up when you feel discouraged. Sometimes being the only one in a room who has your lived experience means having to fight the feeling that you might not even deserve to be in the room. The first piece of advice, therefore, is: work on getting a mentor and/or identify a supportive ally. And, given that you might be a one and only, the mentor you seek may not look like you. The second piece of advice: embrace and celebrate your differences. Time and time again, evidence has shown that when a diverse team of individuals is brought together to solve a problem, whether it’s in the form of a board of directors, an engineering team, or a leadership team, the outcomes tend to be better.
Ayanna Howard is Dean of the College of Engineering at The Ohio State University. Howard has authored 250 publications in refereed journals and conferences, including serving as co-editor/co-author of more than a dozen books and/or book chapters. Her interests include human-robot interaction, human-robot trust, rehabilitation robotics and broadening participation in the field. Prior to her current role, she was Chair of Georgia Institute of Technology’s School of Interactive Computing.
She is also the founder of Zyrobotics, a company that develops mobile therapy and educational products for children with special needs. She has created and led numerous programs designed to engage, recruit, and retain students and faculty from groups that are historically underrepresented in computing. These efforts include National Science Foundation (NSF)-funded broadening participation in computing initiatives.
Among her many honors, Howard received the Computer Research Association’s A. Nico Habermann Award and the Richard A. Tapia Achievement Award. She was recently selected as the 2021-2022 ACM Athena Lecturer for fundamental contributions to the development of accessible human-robotic systems and artificial intelligence, along with forging new paths to broaden participation in computing through entrepreneurial and mentoring efforts.