
Keynote: AI Ethics and Governance
Global Artificial Intelligence Technology Conference 2019
In late May, Wendell Wallach, a researcher at Yale University’s Interdisciplinary Center for Bioethics and chair of the Center’s working research group on Technology and Ethics, delivered the keynote address at the 2019 Global Artificial Intelligence Technology Conference held in Nanjing, China, at the Berggruen Institute’s invitation.
Wallach prefaced his remarks by discussing global normative attitudes towards AI ethics. Noting that there are both techno-optimists and techno-pessimists, he said, “most of us perceive technology as a source of both promise and productivity, and yet there is considerable disquiet over the trajectory of scientific research and the deployment of certain technologies.”
Wallach began by describing the status quo of AI ethics principles. Citing the “AI for Good” Global Summit held in Geneva on May 28, 2019, he noted that there is general international consensus that AI should be developed to maximize welfare for all. Wallach continued that reconciling the current forty-two overlapping frameworks of AI ethics principles should be a next priority. This entails recognizing where there is consensus, but also where true differences are in evidence.
Four primary values appear on nearly every AI ethics list, Wallach continued: 1) privacy, 2) accountability, 3) fairness (minimizing bias), and 4) transparency. He touched on the nature of public-private interactions and the increasing scrutiny that corporations in China and elsewhere are facing as a result of the complexities of managing privacy, in particular, in tandem with sustainable AI development.
Wallach then discussed the present and future use of computer technologies to manipulate public behavior and attitudes for marketing and political purposes, particularly through social media. Citing San Francisco’s ban on the use of facial recognition software by its police, Wallach emphasized that the use of large databases of personal information for surveillance, by police and non-police actors alike, would become a central issue. While circumstances may lead the governments of different countries and regions to different attitudes toward the use of technology for mass surveillance, humanity should not sacrifice accepted norms protecting individual freedoms and the rights of minorities.
Wallach also stressed the challenge of mitigating algorithmic biases (for example, race- and gender-based discrimination) inherent in input data that reiterate existing human biases. Biased outputs should not, for example, be used to make employment decisions. However, he believes it is not currently possible to eliminate all biases from either the input or the output data. We can, nevertheless, do our best to recognize those biases that remain and work to minimize their impact on decision making.
Concerning transparency, Wallach believes corporations should not deploy AI for mission-critical applications, or for any application that may harm people, if they cannot explain the process through which the AI operates or reaches a result. Connecting this thought with the need for accountability, he highlighted the unpredictable and potentially dangerous nature of complex adaptive systems and of learning algorithms that are black boxes. Nevertheless, Wallach appreciates that ‘unexplainable’ AI will not be harmful in many circumstances: given its significant scientific contributions, medical AI, for example, should be developed as long as the output and its application are controlled by scientists.
Wallach also discussed “technological unemployment,” a term coined by the British economist John Maynard Keynes in 1930 to capture the longstanding fear that each innovation and technological development would destroy more jobs than it creates. Regardless of the overall trajectory of job growth or decline caused by AI, Wallach believes serious disruption of available jobs due to automation is inevitable. He mentioned that the Chinese government is sensitive to employment concerns, but wage growth is spiking in the tech sector and there is pressure to automate many of those jobs. Nevertheless, he warns against overly automating entire sectors and industries for the promise of small efficiency gains, as these gains may be accompanied by ballooning societal costs to provide for the needs of those no longer employed.
According to Wallach, lethal autonomous weapons are a pressing topic requiring serious attention. Lethal autonomy is not a weapon system; it is a feature set that selects and destroys targets, and it can be added to any weapon system, including a nuclear weapon or a high-powered munition. Even relatively dumb lethal autonomous weapons will undermine robust command, control, and accountability, and may initiate new wars or unintentionally escalate hostilities.
“Machines (algorithms) should not make life and death decisions,” said Wallach. He is skeptical, however, that human-level AI will emerge in the near future, and he proposes that a human voice and meaningful human control must be present at all levels of decision-making. He advocates for international treaties that outright forbid the development of autonomous weapons.
Due to the significant grey zone resulting from the dual-use nature of most technologies, AI ethical principles necessitate that red lines be set in many areas. The use of facial recognition and voice recognition in combination with large databases of personal information, for example, should be clearly limited. While each country or region will find its own approaches to adopting AI and ensuring its use is ethical, the rights of individual civilians should be respected. And while China’s view of individual rights differs from that of the West, it is nevertheless a signatory to the Universal Declaration of Human Rights.
Wallach also identified future lines of inquiry that need to be pursued, including the design of moral machines: systems that are sensitive to ethical considerations and factor them into their choices and actions. He outlined the evolution of moral machines from ‘operational morality’ to ‘functional morality,’ and eventually to full moral agency. Moral machines with significant capabilities to functionally evaluate choices remain a future possibility, but one that might afford many new applications for deployment.
Finally, Wallach discussed AI governance, by which he means an array of mechanisms for ethical and legal oversight. Actual government regulations should be the last source of oversight, limited to those AI activities whose violation requires enforcement. What is needed is an adaptive and agile institutionalized structure for monitoring developments, flagging gaps, and searching for means to address those gaps from a broad array of available mechanisms, including industry standards, corporate self-governance, laboratory practices and procedures, and, where feasible, technological solutions.
As such, considering the increasing speed of AI development, Wallach proposed a new model for the governance of emerging technologies, developed with Gary Marchant, director of the Center for Law, Science & Innovation at Arizona State University, called “Governance Coordinating Committees” (GCCs). GCCs would coordinate the activities of the various stakeholders, monitor developments, note best practices and which institutions are taking responsibility for the various concerns, and flag gaps.
In his concluding remarks, Wallach stressed the need for any governance or regulatory body to be, above all else, credible and trustworthy. Referencing his role in establishing the first “International Congress on the Governance of Ethical AI,” he invited the audience to join that gathering when it convenes in April 2020.
He finished by imploring the audience to consider AI within a long-term framework and to recognize that any “short-term gains could be far outweighed by longer-term costs.” The address ended with the hope that China will be a leader in properly evaluating the ethical implications of AI and in prioritizing values that ensure AI serves all.