People of ACM - Lauren Wilcox
April 18, 2024
What does your present role at eBay involve?
My work at eBay combines organizational and technical leadership within the company, so my team balances governance and implementation of ethical and responsible AI practices with fundamental research and development. We advance AI safety and responsibility directly as part of our AI/ML model development and evaluation: for example, shaping the work that goes into human alignment, managing the creation and application of robust datasets, benchmarks, and metrics, and overseeing evaluations of AI/ML applications to ensure their operational safety and effectiveness. We also advise on technical opportunities for responsible AI/ML applications, and we manage governance processes company-wide to ensure our AI technologies meet responsible AI standards and principles. This involves crafting and enforcing our corporate policies on responsible AI, defining clear roles and responsibilities within the organization, enabling reviews of upcoming AI applications to maintain ethical integrity, and managing strategic aspects of our AI roadmap. In summary, we make sure that our AI is both technically sound and “societally robust”: ethically acceptable, socially beneficial, and resilient in the face of societal challenges and changes.

As an inaugural member of ACM’s Future of Computing Academy, you co-authored an ACM FCA blog post titled “It's Time to Do Something: Mitigating the Negative Impacts of Computing Through a Change to the Peer Review Process.” What was the key argument of this piece?
Summarizing our original post, we argued that to foster movement toward positive, sustainable societal outcomes, research in computing that touches areas such as health, human labor and economics, learning, society and political life, private life, and the environment should be evaluated through deeper engagement with other relevant scientific disciplines. The status quo at the time (2018) was to motivate our work in computing solely by its anticipated positive impacts, and to present it in that light. We argued that it was no longer acceptable to position our research exclusively as having a net positive impact on the world.
Our post argued that computing researchers can play a much more fundamental role in identifying and articulating the potential negative impacts of their computing innovations (e.g., by looking to research by social scientists, public health experts, critical theorists, and others), and that peer review could be an important arena for facilitating this engagement. I am glad to see that broader impact statements are now encouraged as part of original research submissions at several conferences (NeurIPS, FAccT, etc.).
What is an important area at the intersection of AI and society that hasn’t received enough attention?
There are many exciting ideas to pursue in advancing the kinds of interactions people can have with data, models, and applications of AI/ML. For example, in addition to shaping human touchpoints with datasets and model evaluations, we can surface controls and techniques that allow people to adjust inputs and choices at model inference time, steer the experience in accordance with their preferences and values, support accessibility, shape specific content style choices, and more.
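To make the idea of inference-time controls concrete, here is a minimal sketch in Python. It assumes a prompt-based generation system; the names `UserSteeringPreferences` and `build_steered_prompt` are hypothetical illustrations, not an API from any system Wilcox describes.

```python
from dataclasses import dataclass, field

@dataclass
class UserSteeringPreferences:
    """Hypothetical per-user controls surfaced at inference time."""
    tone: str = "neutral"            # e.g., "formal", "casual"
    reading_level: str = "general"   # accessibility: "simple", "general", "expert"
    avoid_topics: list[str] = field(default_factory=list)

def build_steered_prompt(user_query: str, prefs: UserSteeringPreferences) -> str:
    """Fold the user's stated preferences into the instruction context so the
    model's output is steered at inference time, without any retraining."""
    constraints = [
        f"Write in a {prefs.tone} tone.",
        f"Target a {prefs.reading_level} reading level.",
    ]
    if prefs.avoid_topics:
        constraints.append("Avoid these topics: " + ", ".join(prefs.avoid_topics) + ".")
    return "\n".join(constraints) + "\n\nUser request: " + user_query

# Example: the same query, steered by one user's stated preferences.
prefs = UserSteeringPreferences(tone="casual", reading_level="simple",
                                avoid_topics=["graphic violence"])
print(build_steered_prompt("Explain how auction bidding works.", prefs))
```

The design point is that the steering happens at inference time: a single underlying model can serve users with different stylistic preferences and accessibility needs, with no retraining required.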
More broadly in model development and evaluation, we are seeing more attempts to incorporate human values, norms, and pluralistic perspectives into AI/ML design and development. Some of this work focuses on eliciting and representing people’s preferences, planning community-based evaluations, and supporting deliberation on open questions of governance. We need to engage multicultural and impacted communities in fundamental design processes for problem formulation, data collection, model training, and other upstream design decisions and governance. It will be vital to look beyond consensus as an achievable end state, and to consider how we might situate other critical elements of democratic processes and participation in AI development, with culturally situated and community-based evaluations enabled at multiple levels of governance.
Lauren Wilcox is a Senior Director and Distinguished Applied Scientist at eBay, where she leads eBay’s Office of Responsible AI. Wilcox has held research and organizational leadership roles in both industry and academia. At Google Research, she was a senior staff research scientist and group manager of the Technology, AI, Society & Culture (TASC) team, advancing work to understand and shape the sociotechnical factors that surround machine learning development and the effects AI/ML has on impacted communities. She was also a research lead aligning health AI advancements with communities' needs, and a co-founder of Google's Health Equity program, which brings equity, safety, and transparency to the forefront of technology development and deployment. Wilcox held an associate faculty position at Georgia Tech's School of Interactive Computing, where she currently holds an adjunct position. Before earning her PhD, Wilcox was a staff software engineer at IBM, and during her PhD she was a student researcher at IBM Watson.
Wilcox contributes to ACM conferences, having served as a Subcommittee Chair for ACM CHI 2022 and as an Associate Chair for ACM FAccT 2024, which will be held in Rio de Janeiro from June 4 to 6. She was recently named an ACM Distinguished Member for contributions to research in responsible AI, human–computer interaction, and health informatics.