People of ACM - Virginia Dignum
November 19, 2024
How did your career path lead to work in responsible artificial intelligence?
My journey into AI began with a fascination for the ways technology can help solve complex problems and assist people in
their daily lives. I had already become interested in AI during my studies in applied mathematics in the 1980s at the University of Lisbon. My first job, fresh out of university, included developing an AI system to support the design of social housing in Lisbon. Even in 1986, I realized that AI does not come without risks. Although the system was properly developed and tested, it ended up suggesting that we assign a house to a family—only for us to discover that the house was already inhabited! The input data was not up to date, and I quickly realized that systems are only as good as the data they use.

Before moving to academia, I worked in intelligent systems engineering and the development of AI and decision-making systems, which sparked my interest in the ethical and societal impacts of AI. As AI technologies became more prevalent and influential, I realized that their potential risks—such as bias, loss of privacy, and lack of transparency—needed to be addressed proactively. This led me to focus on responsible AI, which ensures that AI systems are developed and used in ways that are ethical, transparent, and beneficial to society. In fact, I believe I was one of the first to work on the topic and to use the term “Responsible AI,” well before it became fashionable. Throughout my career, I have aimed to combine technical expertise with an understanding of the social and ethical implications of AI, which is why I now focus on how we can balance innovation with responsibility.
In the recent article “HCI Sustaining the Rule of Law and Democracy: A European Perspective,” co-written with Mireille Hildebrandt, you advocated for the use of a social contract between an autonomous agent (an AI system pursuing its own goals) and its societal role (the objectives and norms under which it is intended to operate). Why would such contracts be beneficial?
The idea of using a social contract in the context of AI is rooted in the need to manage the autonomy of intelligent agents while ensuring they adhere to societal norms and values. It builds on much of my earlier research. Autonomous systems, especially AI, operate with increasing levels of independence and can make decisions that affect society in profound ways. A social contract provides a framework within which these systems can be held accountable for their actions, ensuring that they serve the public good while maintaining ethical boundaries.
In essence, a social contract establishes clear expectations about how an AI system should behave and interact with other systems as well as people, taking into account the goals guiding its deployment as well as the societal rules and norms that should guide its actions. For instance, in open, multi-agent systems where agents may have their own goals, the social contract helps ensure that while they pursue their objectives, they do so in a way that respects the rules of the broader system they are part of. This contract provides flexibility for the agent's autonomy but also embeds a mechanism for accountability if the agent deviates from expected behavior.
By embedding these principles into the design of AI systems, we can create a balance where AI's capabilities are harnessed to benefit society while limiting potential risks, such as unethical behavior or unintended consequences. The contract ensures transparency and makes it easier to track and explain the decisions made by AI systems, which is crucial for maintaining trust and supporting democratic processes. As such, the contract would act as a safeguard to prevent the misuse of AI and to ensure that its operations are always geared towards the public good, reinforcing the rule of law and democratic values. It also helps to clarify the expectations for developers, deployers, and users of AI alike, fostering a more responsible approach to its design and deployment.
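To make the idea concrete, here is a minimal, purely illustrative Python sketch of how such a contract might be wired into an agent loop: the agent proposes actions toward its own goal, while an explicit set of norms is checked before execution and any deviations are logged for accountability. The classes and names (Norm, SocialContract, Agent) are hypothetical and not drawn from the interview or any specific framework.

```python
# Hypothetical sketch: an agent's proposed action is checked against explicit
# norms before execution, and deviations are logged for accountability.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Norm:
    """A societal rule the agent is expected to respect."""
    description: str
    permits: Callable[[str], bool]  # returns True if the action complies

@dataclass
class SocialContract:
    norms: List[Norm]
    violations: List[str] = field(default_factory=list)

    def check(self, agent_id: str, action: str) -> bool:
        """Return True if the action complies with all norms; log deviations."""
        ok = True
        for norm in self.norms:
            if not norm.permits(action):
                self.violations.append(
                    f"{agent_id}: '{action}' violates '{norm.description}'"
                )
                ok = False
        return ok

@dataclass
class Agent:
    agent_id: str
    goal: str

    def propose_action(self) -> str:
        # In a real system this would come from planning toward the agent's goal.
        return f"allocate_resource_for:{self.goal}"

# Example norm, echoing the housing anecdote: do not allocate an occupied resource.
occupied = {"house_42"}
no_double_allocation = Norm(
    description="do not allocate an already-occupied resource",
    permits=lambda action: not any(r in action for r in occupied),
)

contract = SocialContract(norms=[no_double_allocation])
agent = Agent(agent_id="housing_planner", goal="house_42")

action = agent.propose_action()
if contract.check(agent.agent_id, action):
    print(f"Executing: {action}")
else:
    print(f"Blocked: {action}; violations recorded: {contract.violations}")
```

In this sketch the contract is external to the agent, which mirrors the accountability point above: the agent keeps its autonomy in choosing actions, but compliance is assessed against rules it does not control, and every deviation leaves a trace that can be inspected later.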
Will you tell us a little about the premise of your upcoming book, The AI Paradox?
The AI Paradox explores the inherent contradictions we face as artificial intelligence continues to advance. The paradox at the heart of the book is this: the more AI develops and takes over tasks previously reserved for human intelligence, the more it underscores the irreplaceable nature of human capabilities. AI can process vast amounts of data and recognize patterns far beyond human capacity, yet it still lacks the depth of understanding, creativity, and moral reasoning that defines human intelligence. In short, the more we rely on AI, the more we recognize how crucial human decision-making, ethical judgment, and emotional intelligence remain.
The book dives into several key paradoxes related to AI, such as:
- The Justice Paradox: While AI can reduce certain biases, it also amplifies others due to the limitations of data and the societal structures in which it is deployed.
- The Power Paradox: As AI becomes more widely available and more powerful, we paradoxically risk losing control over its effects, as large corporations and a small elite currently hold the most influence over its development and deployment.
- The Superintelligence Paradox: While AI systems can enhance decision-making and problem-solving, it is the collaborative and organized efforts of humans in groups and societies that truly exhibit superintelligent behavior. AI’s potential lies not in creating machines that surpass human intelligence in every respect but in amplifying and optimizing human capabilities through collective action.
Ultimately, my new book aims to provoke deeper reflection on the human role in an AI-driven world. While recognizing AI as a powerful tool, the book underscores that it cannot fully replace the uniquely human qualities of decision-making and creativity. Instead, AI should be designed to enhance human capabilities, with a focus on transparency, accountability, and fairness. Through its exploration of various paradoxes, the book offers insights on how to responsibly navigate the complexities of AI and prompts a critical reassessment of our assumptions about its potential and societal impact. It highlights the need for governance, ethical principles, and strong social frameworks to ensure AI’s benefits are fully harnessed while minimizing risks, ultimately advocating for a balance between technological progress and the protection of essential human values.
What unique role can the ACM Technology Policy Council play? Why should computing professionals join the Council?
The ACM Technology Policy Council plays a pivotal role in shaping global policy discussions around the ethical and responsible use of technology. While the Council serves as a critical bridge between computing professionals, policymakers, and the general public, I believe that its mission goes beyond merely ensuring that the voices of experts are heard—it also empowers computing professionals to understand the broader societal and ethical implications of their work, encouraging them to actively reflect on the impact their technologies have on society. This is especially important as governments and international organizations increasingly seek guidance on how to regulate AI and other emerging technologies.
I believe that all technology must be designed in ways that respect human rights, promote fairness, and remain transparent and accountable, which is what I actively promote in the AI field. This requires not just expertise in technical matters but also sensitivity to the ethical and policy dimensions of technological development. I hope that the ACM TPC will encourage professionals in the field to engage more deeply in policy discussions and recognize their responsibility in shaping systems that serve the public good.
The Council offers a platform for professionals to influence crucial debates about AI governance, ensuring that innovation is balanced with the public interest and societal well-being. It should also be seen as a way for professionals to ensure that technological innovation does not outpace the development of regulatory and ethical frameworks, providing a balanced approach to progress that takes into account both innovation and societal goals. To achieve this, computing professionals must engage in these discussions about governance and ethical frameworks.
Virginia Dignum is a Professor at Umeå University in Umeå, Sweden, where she works on Responsible Artificial Intelligence and directs the university’s AI Policy Lab. Her research focuses on complex interactions and interdependencies between people, organizations, and technology. Dignum’s publications include the book Responsible Artificial Intelligence.
Among her volunteer activities, she is a member of the UN High-Level Advisory Body on AI and a Senior Advisor to the Wallenberg Foundations. Dignum was recently appointed Chair of the ACM Technology Policy Council, which sets the agenda for ACM’s global policy activities and serves as the central convening point for ACM’s interactions with government organizations, the computing community, and the public in all policy matters related to information technology.