
Can Artificial Intelligence be moral?


Can artificial intelligence (AI) be moral? In my opinion, no. Should this prevent us from establishing how to morally use AI? Absolutely not. In fact, the absence of AI moral capability should drive our need for explicit and clear frameworks for the moral use of AI outputs. I use the term “moral”, somewhat sensationally, to emphasise the use of AI as a tool of judgment (decision making or decision support) where outcomes need to adhere to principles of “right” and “wrong”. However, in reality, such polarity is not always practicable, and the terms “ethical” and “fair” are more familiar and more commonly used.

The discourse on AI ethics, or to be more specific, fairness, is not new. The more we embed AI in our lives, and the more we understand the possibilities it may bring, the more we want to ensure that AI decisions, whether made in a supervised or unsupervised manner, are made fairly, ethically and morally. Why? Because we want – no, we demand – fairness in our lives.

How, then, should companies balance this perceived dichotomy between AI innovation and governance, exploring growth possibilities while preventing unforeseen mishaps? Counterintuitively, by slowing down and appreciating not only the possibilities of AI but, more acutely, its limitations.

The most important limitation, in my view, lies at the core of the definition of computational intelligence, described in Computational Intelligence: A Logical Approach as “any device that perceives its environment and takes actions that maximise its chances of successfully achieving its goals”. An AI algorithm is a statistical engine that searches for patterns in data that produce the optimal statistical outcome.


For example, whether predicting which wood samples will have the highest durability or which individuals are likely to buy a particular type of coffee, an AI algorithm, while dependent on the quality of the data provided, treats the wood and the individuals in the same manner. It does not distinguish the living from the non-living; it is incapable of value judgment. The algorithm will statistically identify the patterns from which to infer the wood with the highest predicted durability or the individuals most likely to purchase the coffee, as the case may be.
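To make this concrete, consider a minimal sketch in Python (using scikit-learn, with entirely hypothetical data and feature names): the identical fit-and-predict calls are applied whether the rows describe wood samples or people.

    # A minimal sketch, assuming scikit-learn and hypothetical data.
    from sklearn.linear_model import LogisticRegression

    # Wood samples: [density, moisture] -> durable (1) or not (0)
    wood_X = [[0.71, 0.12], [0.43, 0.30], [0.65, 0.18]]
    wood_y = [1, 0, 1]

    # Individuals: [age, past purchases] -> buys the coffee (1) or not (0)
    people_X = [[34, 5], [52, 0], [27, 8]]
    people_y = [1, 0, 1]

    # Nothing in the code distinguishes living from non-living subjects;
    # both are simply matrices of numbers to be fitted statistically.
    wood_model = LogisticRegression().fit(wood_X, wood_y)
    people_model = LogisticRegression().fit(people_X, people_y)

    print(wood_model.predict([[0.60, 0.20]]))  # predicted durability class
    print(people_model.predict([[30, 6]]))     # predicted purchase class

The pipeline is indifferent to what the numbers represent; any moral weight attached to the second dataset exists only in our heads, not in the mathematics.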

In other words, there is no possibility of moral judgment with respect to the impact of either algorithm; it was simply never considered. Guidelines became necessary, not to change AI per se, but to give the developers of AI and the users of AI output a framework for ensuring that questions of morality and materiality have been taken into consideration.

The world has responded accordingly. Governing principles and guidelines have been developed by governments and commercial entities alike, including the Singapore Personal Data Protection Commission’s (PDPC) 2019 Model AI Governance Framework and the Monetary Authority of Singapore’s (MAS) earlier 2018 Fairness, Ethics, Accountability and Transparency (FEAT) principles. FEAT was an attempt to fill the then absence of regulatory guidelines on the use of AI in the financial sector: a simple, succinct set of questions we should ask ourselves as we set about using AI in a governed manner.


Nonetheless, these principles, while indeed addressing the absence of an overarching governance framework, made another challenge evident: questions of fairness and ethics require each AI algorithm to undergo a lengthy process of subjective review. How should companies incorporate such frameworks systematically in order to operationalise AI at scale, where algorithms may need to adapt promptly to behavioural changes?

Principles alone are not enough. Hence the Veritas initiative, named after the Roman goddess of truth, was created. Envisioned as the next stage beyond the FEAT principles, its “truth” was the creation of systematic tests that would assess AI “morality” automatically. MAS recently announced the completion of the first phase of the Veritas initiative and published the FEAT Fairness Principles Assessment Methodology and accompanying case studies.

The concept of fairness is the intersection of multiple criteria and objectives. History shapes present behaviour. The behaviour identified in the data may differ entirely from our perception of what our behaviour has been or is. Data is likely to be eternally imperfect. Algorithms will always have errors.
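That intersection is not abstract: the same set of decisions can satisfy one widely used fairness metric while failing another. A toy illustration in Python, with entirely hypothetical loan-approval numbers, makes the point:

    # A toy illustration of why fairness is an intersection of criteria
    # rather than a single test. All numbers are hypothetical.

    def demographic_parity_gap(approved_a, total_a, approved_b, total_b):
        """Difference in overall approval rates between groups A and B."""
        return approved_a / total_a - approved_b / total_b

    def equal_opportunity_gap(tp_a, qualified_a, tp_b, qualified_b):
        """Difference in approval rates among truly qualified applicants."""
        return tp_a / qualified_a - tp_b / qualified_b

    # Both groups are approved at the same overall rate (gap = 0.0) ...
    print(demographic_parity_gap(50, 100, 50, 100))  # 0.0

    # ... yet qualified applicants in group B are approved far less often.
    print(equal_opportunity_gap(45, 60, 30, 60))     # 0.25

Optimising for one metric can worsen the other; deciding which of them “fairness” means in a given context is a judgment the algorithm cannot make on its own.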

Can AI be moral? No. Perhaps one day we will unlock the mathematics of morals and this will become a tangible possibility. In the interim, companies can achieve the moral use of AI by establishing governance frameworks, and concurrently achieve large-scale AI operationalisation, by deconstructing AI into two pillars:

1. Build: Incorporation of FEAT-like principles in the development and independent validation of new AI algorithms as part of a standard operating model.

2. Run: Automatic and systematic validation of AI output against an existing metric of materiality and severity. Akin to a driverless train, when in doubt or past an acceptable tolerance threshold, “stop” and allow a human to intervene (a minimal sketch of such a check follows below).
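What might such a stop-and-escalate check look like in practice? The sketch below, in Python, is illustrative only: the threshold values and function names are assumptions, not part of the published FEAT or Veritas methodology.

    # A minimal sketch of the "Run" pillar's stop-and-escalate check.
    # Threshold values and names are illustrative assumptions.
    CONFIDENCE_THRESHOLD = 0.90  # base bar below which we never act alone

    def act_or_escalate(prediction, confidence, severity):
        """Act on a model output only when its confidence clears the
        tolerance for the decision's severity; otherwise hand off."""
        # Higher-severity decisions demand a higher confidence bar.
        required = min(1.0, CONFIDENCE_THRESHOLD + 0.05 * severity)
        if confidence >= required:
            return f"auto-approve: {prediction}"
        return "stopped: routed to human review"

    print(act_or_escalate("grant loan", confidence=0.97, severity=0))  # auto
    print(act_or_escalate("grant loan", confidence=0.92, severity=2))  # human

The design mirrors the driverless-train analogy: automation proceeds only within a pre-agreed tolerance, and anything outside it stops and escalates to a person.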


What is truly fair? The ideal of trustworthiness is perhaps a far better and more pragmatic high watermark to aspire to than fairness. What we need from AI is trust and honesty – at the bare minimum, mechanisms and controls that enable us to identify when things go wrong so that we can intervene.

The author is a former MAS chief data officer who pioneered the FEAT principles and the Veritas initiative.




