A few weeks ago, AI Singapore hosted Andreas Deppeler, Adjunct Associate Professor at NUS Business School and Director of Data and Analytics at PwC Singapore, in a two-part webinar series for staff and apprentices. Over four hours of lectures and Q&A, Prof Andreas walked the audience through the vast landscape of AI ethics and governance. In this article, I have captured the highlights of the sessions. If you prefer to go straight to the lectures, you can view the recordings at the end of the article.
What could go wrong with AI?
AI is a powerful technology that is finding increasing applications in our lives. Drawing upon two primary sources – the work of computer scientist Stuart Russell and the privately funded organisation Partnership on AI – Prof Andreas began with a comprehensive, high-level look at where AI might cause harm, intended or unintended. This was followed by a series of documented cases where problems in explainability, bias and security have manifested themselves in applications involving AI. From the examples quoted, it is worth noting that even major technology players like Amazon and Apple have not been immune to such errors in their initial deployments.
Another area to pay attention to is the displacement of jobs due to AI automation. While experts generally agree that there will be disruption in the market, there is no consensus on the expected scale of it.
In the development of automated vehicles, the moral decisions that a machine has to make in collision avoidance and life preservation come under scrutiny. The Moral Machine experiment was an attempt to collect large-scale data on how citizens of different countries would want autonomous vehicles to resolve moral dilemmas in the context of unavoidable accidents. The results were illuminating, revealing distinct regional and cultural differences in deciding who should be sacrificed and who should be saved.
Ethics: Drawing up the principles
If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose which we really desire.

– Norbert Wiener, 1960
While the concern that machines may not do what is “right” is not new, Prof Andreas traced the first serious conversation on safe and beneficial AI to the AI Safety Conference (2015) in Puerto Rico, a gathering of academics and industry players organised by the Future of Life Institute. The conference led to the publication of an open letter exhorting the development of AI that is not only capable but also maximises societal benefit. Since then, several non-profit organisations for safe and beneficial AI have been founded.
A second conference, held in Asilomar, California in 2017, produced 23 principles covering wide-ranging themes in AI. Two years later, the European Commission presented its Ethics Guidelines for Trustworthy Artificial Intelligence in April 2019, and the OECD followed suit just a month later with its OECD Principles on AI. At almost the same time, the Beijing Academy of Artificial Intelligence (BAAI) published its Beijing AI Principles.
From these publications, researchers have identified five common themes, or overarching principles, of ethical AI: beneficence, non-maleficence, autonomy, justice and explicability. Interestingly, subsequent work found that the first four correspond to the four traditional bioethics principles, joined by explicability as a new enabling principle specific to AI.
Governance: Operationalising the principles
Typically, the published principles and guidelines are not legally binding but persuasive in nature. To date, the German non-profit organisation AlgorithmWatch has compiled more than 160 frameworks and guidelines for AI use worldwide, and found that only ten have practical enforcement mechanisms. There is a need to go beyond guidelines as public relations and operationalise them. On a related note, five types of risks, already encountered or foreseeable, have been identified: (1) ethics shopping, (2) ethics bluewashing, (3) ethics lobbying, (4) ethics dumping, and (5) ethics shirking. These risks undermine the best efforts in translating principles into practices.
Ethically Aligned Design
In March 2019, the IEEE launched Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition (EAD1e). It is a global treatise crowd-sourced over three years from experts in business, academia and policymaking. At almost 300 pages, it is organised into three pillars (reflecting anthropological, political and technical aspects) and eight general principles (human rights, well-being, data agency, effectiveness, transparency, accountability, awareness of misuse and competence).
Prof Andreas spent some time diving deeper into the sixth general principle, accountability. This is especially relevant to developers, as AI applications have been known to deviate from their intended use and will likely continue to do so on occasion, despite the best of intentions. The question of the legal status of accountability inevitably comes up. Among other discussion points, government and industry stakeholders should identify the types of decisions and operations that should never be delegated to AI systems.
In February 2020, the IEEE announced the completion of the first phase of its work on the Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS). It aims to offer a process and define a series of marks by which organisations can seek certification for the processes around the AI products, systems and services they provide. This is a positive development in Prof Andreas’ view, as he sees the possibility of Singapore contributing in this space.
Model AI Governance Framework
The Model AI Governance Framework published by the Personal Data Protection Commission (PDPC) is the framework most developers in Singapore are familiar with. The second edition was released in January 2020 at the World Economic Forum Annual Meeting in Davos, Switzerland. It is voluntary in nature and provides guidance on issues to consider and measures that can be implemented to build stakeholder confidence in AI, and to demonstrate reasonable efforts to align internal policies, structures and processes with relevant accountability-based practices in data management and protection. It consists of two guiding principles – (1) AI that is explainable, transparent and fair, and (2) AI that is human-centric – and four guidance areas: (1) internal governance structures and measures, (2) appropriate level of human involvement, (3) operations management, and (4) stakeholder interaction and communication.
Open Source Tools
Beyond discussing principles, developers are most interested in the available tools that can help them in their work. AI Fairness 360, AI Explainability 360 and the Adversarial Robustness Toolbox are open source Python libraries from Big Blue. Similarly, Microsoft offers Fairlearn and InterpretML. Developers can check these out and evaluate them for their own needs before deciding whether to develop their own Python packages.
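To make the idea concrete, here is a minimal sketch of the kind of group-fairness metric these toolkits compute – the demographic parity difference, which Fairlearn, for instance, exposes in its metrics module. The function names and the toy loan-approval data below are illustrative, not the libraries' actual APIs.

```python
def selection_rate(y_pred, group, value):
    """Fraction of positive predictions within one sensitive group."""
    preds = [p for p, g in zip(y_pred, group) if g == value]
    return sum(preds) / len(preds)

def demographic_parity_difference(y_pred, group):
    """Largest gap in selection rates across sensitive groups.

    0.0 means every group receives positive predictions at the same
    rate; larger values indicate greater disparity."""
    rates = [selection_rate(y_pred, group, v) for v in set(group)]
    return max(rates) - min(rates)

# Toy loan-approval predictions (1 = approve) for two groups, A and B.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A is approved 75% of the time, group B only 25%.
print(demographic_parity_difference(y_pred, group))  # 0.5
```

The real libraries go much further – AI Fairness 360, for example, also ships bias-mitigation algorithms that adjust the data or the model, not just metrics that measure disparity – but the underlying questions they operationalise are the fairness principles discussed above.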
Finally, here are the recordings of the two parts of the webinar series. Do catch the lively Q&A sessions at the end of each lecture when Prof Andreas fielded questions from our managers, engineers and apprentices.
- Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell (2019)
- From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices by Jessica Morley et al.
- Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical by Luciano Floridi
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims by Miles Brundage et al.