Artificial Intelligence

Ethical artificial intelligence


By contrast, AI will very likely turn machines into independent agents, with their own learning and decision-making capabilities, possibly able to think at a higher level than humankind (superintelligence). We should all be very concerned about this development and ask ourselves whether we would accept a scenario where machines could judge and act motivated by values different from human values, and use those values, combined with their knowledge, to shape or modify our societies. Philosopher and Oxford professor Nick Bostrom has put it clearly: superintelligent AI agents could dominate us and even bring humanity to extinction.

Several studies show that AI systems (machine learning, deep learning, deep reinforcement learning) are not necessarily ethical. AI systems are trained to achieve goals, e.g., to maximize utility, but the way this happens does not necessarily follow ethical principles or human values. An example will serve to illustrate.

Suppose that a machine is trained to form learning groups at school. Based on the given training data, the machine learns that children from low-income families are less likely to succeed at school and, as a consequence, pre-selects those children into specific learning groups to achieve the most efficient learning environment in the school. In this case, one could argue that the training data generated a bias and that this bias must therefore be corrected, e.g., by using alternative training data. However, even if no bias has been incorporated into the decision process and the machine reaches its goal of improving the learning environment at the school, it is not clear that the way this happened is ethically acceptable. Indeed, the selection criterion (perhaps the most powerful predictor based on the training data) is not based on children’s learning skills (probably what humans would care about) but on their social status, and this is ethically unacceptable in many societies. More generally, machines could also take unintended instrumental actions that increase the likelihood of achieving a goal at a later stage, e.g., self-protection or the acquisition of necessary resources at the cost of humans.
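To make the learning-group example concrete, the following is a minimal sketch, using entirely synthetic data and hypothetical feature names, of how a model trained only to predict school success can end up sorting children by social status simply because family income is the strongest predictor in its training data:

```python
# Minimal sketch (hypothetical, synthetic data): a model trained to predict
# "school success" can end up selecting children by social status, because
# family income is the strongest predictor in the data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# learning_skill is what humans would care about ethically;
# family_income is a socio-economic attribute strongly correlated with the label.
learning_skill = rng.normal(size=n)
family_income = rng.normal(size=n)
success = (0.3 * learning_skill + 1.0 * family_income
           + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([learning_skill, family_income])
model = LogisticRegression().fit(X, success)

# The coefficient on family_income dominates: group assignment would be driven
# by social status, not by learning skills -- efficient, but ethically contested.
print(dict(zip(["learning_skill", "family_income"], model.coef_[0].round(2))))

# Dropping the sensitive column is one naive mitigation; it is not sufficient
# if other features act as proxies for income.
model_no_income = LogisticRegression().fit(X[:, [0]], success)
```

Dropping the sensitive column, as in the last line, is the naive mitigation mentioned above; it does not remove the problem if other features act as proxies for income.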

Even if a machine is instructed not to choose unethically in given scenarios (in the form of clearly specified moral norms, e.g., “If this happens, then do not act that way”), this is not sufficient to avoid unethical behavior. In many modern applications, AI systems are too complex, and humans might be unable to predict how AI systems will reach their goals. Therefore, in this case, humans cannot predict and control the full set of possible scenarios a machine will face (lack of transparency). On the other hand, if humans control AI agents so that they do not become independent decision-makers, then we probably also limit the results they may achieve.

Therefore, the question of how to ensure that AI agents will act ethically is very challenging, and an answer likely lies somewhere between setting strict rules (regulation) and allowing machines to learn with their full, uninhibited potential.

We are now at the beginning of a great adventure, and we have a choice about how that adventure is to begin. Will we stand by as AI and its companion ML evolve according to their own design, or will we, as evolved creatures, specify the parameters of this evolution so that the amazing results certain to come will enhance human existence rather than constrain it, or, in the abysmal, abhorrent possibility, destroy it?

The normative issue here is that humans should design machines to ensure ethical learning. Machines should learn that given actions are unethical and in conflict with fundamental values set by humans. Ethical learning is a necessary condition for machines to be beneficial to humans, and for humans to guarantee safety and ensure that machines will judge and act motivated by our values. In general, humans will not be able to control each step of machines’ learning processes, because many of those steps will not be predictable, and some will not even be transparent to humans. However, humans can impose transparency and predictability with respect to the moral and ethical systems, and impose ethical learning to ensure that machines learn consistently with the chosen moral and ethical systems.

New York University professor Dolly Chugh and organizational psychologist Mary C. Kern, in their work Ethical Learning: Releasing the Moral Unicorn, describe the conditions for ethical learning. First of all, ethical learning requires a central moral identity, i.e., a set of moral norms with respect to which actions are evaluated. Second, ethical learning requires psychological literacy, i.e., the ability to identify the gap between the central moral identity and the actual behaviors or actions. The absence of psychological literacy could lead humans to deny the gap (self-delusion) in order to limit the self-threat generated by it. Finally, ethical learning requires a growth mindset, i.e., the belief that effort and perseverance will be successful in reducing the gap between the central moral identity and the actual behavior.

When it comes to ethical decision-making (which is not equivalent to ethical learning), American psychologists James Rest and Darcia Narvaez, in their book Moral Development in the Professions: Psychology and Applied Ethics, identify four distinct psychological processes: moral sensitivity (moral awareness), moral judgment, moral motivation, and moral character. Moral sensitivity relates to psychological literacy, i.e., to individuals’ ability to identify moral issues, which are gaps between the observed behavior or actions and the moral identity. Moral judgment consists of formulating and assessing solutions to the existing moral issues that have moral justification, i.e., that are consistent with the given central morality. Moral motivation consists of individuals’ intention to choose solutions that are morally justified over solutions that are inconsistent with the given moral identity. Finally, moral character refers to individuals’ capability (strength, courage) to implement their intentions.

The question now is: how can we translate all these conditions for ethical learning and ethical decision-making into machines? This is a very challenging question.

First, what is the set of moral norms that could be used to define the central moral identity of a machine? The central moral identity is crucial, because it guides ethical learning and decision-making, and thus the final outcome of how ethics influences AI agents. The set of moral norms should be general enough to allow for the existing heterogeneity of moral norms among humans, but at the same time specific enough to ensure ethical learning and decision-making.

To see the importance of this initial step, consider American author and biochemist Isaac Asimov’s Three Laws of Robotics as possible moral norms for an AI central moral identity:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

As discussed by Butler University professors James McGrath and Ankur Gupta in their paper Writing a Moral Code: Algorithms for Ethical Reasoning by Humans and Machines, it is not only the content of the three laws that is crucial for their implications, but also their order. For example, placing Law 2 first, followed by Laws 3 and 1, could generate a catastrophic world in which humans exploit machines to harm other humans. Therefore, moral identity should be implemented very carefully. However, even if the relevant set of moral principles is identified and carefully implemented, this could be insufficient to induce proper ethical behavior over time. Indeed, rule-based ethics can have many limitations, e.g., being too restrictive or insensitive to consequences.
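The dependence on ordering can be made concrete with a minimal sketch. The example below treats the laws as a prioritized rule list under a deliberately simplified “first applicable rule decides” semantics (Asimov’s own wording makes the later laws defer to the earlier ones, so this is a crude illustration, not a faithful encoding); all action attributes are hypothetical:

```python
# Minimal sketch: the laws as a prioritized rule list. Under a
# first-applicable-rule-wins semantics, swapping the order of the laws
# changes which actions the robot accepts.
from typing import Callable, Dict, List, Tuple

# Each rule: (name, condition on the action, decision when the condition holds).
Rule = Tuple[str, Callable[[Dict[str, bool]], bool], str]

def decide(action: Dict[str, bool], ordered_rules: List[Rule]) -> str:
    """The highest-priority rule whose condition applies decides; later rules are ignored."""
    for name, applies, decision in ordered_rules:
        if applies(action):
            return f"{decision} ({name})"
    return "permit (no rule applies)"

law1: Rule = ("Law 1: no harm to humans", lambda a: a["harms_human"], "forbid")
law2: Rule = ("Law 2: obey human orders", lambda a: a["ordered_by_human"], "obey")
law3: Rule = ("Law 3: self-preservation", lambda a: a["destroys_self"], "forbid")

# A human orders the robot to carry out an action that harms another human.
action = {"harms_human": True, "ordered_by_human": True, "destroys_self": False}

print(decide(action, [law1, law2, law3]))  # forbid (Law 1): harm is checked first
print(decide(action, [law2, law3, law1]))  # obey (Law 2): obedience now overrides
                                           # the no-harm rule
```

The content of the rules is identical in both calls; only their priority differs, and so does the outcome, which is exactly why the ordering itself carries moral weight.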

For example, if a rule says that the act of attempting to murder is unacceptable regardless of the outcome, the AI agent will not try to protect a human being if doing so might require attempting to murder another human being. However, the outcome in this case might be considered unethical in given societal contexts, e.g., a police officer who would not act in the presence of a criminal who is killing innocent people.

Therefore, our view is that ethical AI should be a mix of rule-based ethics and learning from actions and consequences. This puts ethical learning and decision-making at the core of ethical AI. That is to say, the central question in this context is how decision algorithms learn, i.e., how they are trained, including which datasets are used to train them and how rewards are set. As an example, the design of AI systems should prevent self-delusion, e.g., AI agents modifying the environment in order to increase their reward without reaching the intended goals. As previously discussed, self-delusion is also present in humans, who deny reality or ignore moral issues (exhibit no moral awareness).
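One minimal sketch of this mix, with all action names and numeric values purely illustrative, is to let hard moral rules veto candidate actions before a learned value estimate ranks whatever remains; the same rule layer also blocks the kind of reward-hacking “self-delusion” mentioned above:

```python
# Minimal sketch of "rules plus learning": hard moral rules veto candidate
# actions, and a learned value estimate ranks only the remaining ones.
from typing import Callable, Dict, List

def choose_action(candidates: List[str],
                  learned_value: Dict[str, float],
                  moral_rules: List[Callable[[str], bool]]) -> str:
    """Pick the highest-value action among those that no rule forbids."""
    allowed = [a for a in candidates if not any(rule(a) for rule in moral_rules)]
    if not allowed:
        return "defer_to_human"   # no ethically acceptable option: escalate
    return max(allowed, key=lambda a: learned_value.get(a, 0.0))

# Illustrative values a trained policy might assign (higher = more "useful"):
values = {"manipulate_sensor_readings": 0.9,   # reward hacking / self-delusion
          "complete_task_normally": 0.7,
          "shut_down": 0.1}

rules = [lambda a: "manipulate" in a]          # crude stand-in for a moral norm

print(choose_action(list(values), values, rules))  # -> complete_task_normally
```

In practice the rule layer would itself be refined through ethical learning, as discussed below, rather than remain a fixed filter.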

Back to our central question: How should ethical learning and decision-making be implemented in AI? We apply Rest’s framework for ethical decision-making.

Moral awareness in AI-driven applications requires a moral identity, i.e., a set of moral norms. Initially, these norms could be a set of rules designed to reflect the relevant ethical principles in the context of the given application. However, moral identity is not fixed but will be modified with experience and learning. As an example, a rule could say that the act of attempting to murder is unacceptable. However, if the AI application is a robot that must prevent crimes, it must be trained to refine this rule, because, as we mentioned before, the rule is too strict for the purpose of the given application. This, again, puts ethical learning at the core, and thus the way the model is trained also becomes crucial. Indeed, the AI model should be trained on diverse data to limit potential biases, because specific data generated from humans’ actions is not necessarily free of ethical issues (e.g., police officers who killed as a result of misjudgments). The data should cover several parameters of interest and dimensions. Ethical values should be incorporated into the training algorithms themselves, so that the machine’s moral identity converges through learning towards the relevant and intended ethical setting.
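A minimal sketch of what “incorporating ethical values into the training algorithm” could mean, with a purely illustrative objective and penalty weight, is to optimise task reward net of a penalty for violations of the current moral identity, so that training itself favours compliant behaviour:

```python
# Minimal sketch (illustrative numbers): the training objective is task reward
# minus a penalty for violations of the moral identity, so that the optimisation
# itself pushes the learned behaviour towards the intended ethical setting.
def ethical_objective(task_reward: float,
                      violations: int,
                      penalty_weight: float = 10.0) -> float:
    """Reward used during training: task performance net of ethical penalties."""
    return task_reward - penalty_weight * violations

# Two hypothetical training episodes:
efficient_but_biased = ethical_objective(task_reward=0.95, violations=3)   # -29.05
slower_but_compliant = ethical_objective(task_reward=0.80, violations=0)   #   0.80

# With a sufficiently large penalty, the compliant behaviour scores higher,
# so gradient updates or policy search favour it.
assert slower_but_compliant > efficient_but_biased
```

The penalty weight encodes how strongly the moral identity is allowed to trade off against raw task performance; choosing it is itself an ethical decision, not a purely technical one.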

Moral judgment should be assessed in AI-driven systems through rigorous measurement: the goal is to detect deviations between the established moral identity, the actual behavior, and its consequences. Moral motivation should be implemented by setting ethical rewards, i.e., AI agents should prefer the solution that is consistent with the given moral identity. Finally, moral action should be taken only after various ML models have been evaluated against each other.
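As a sketch of that last step, with hypothetical model names, scores, and threshold, candidate models could be audited for deviations from the moral identity (here a rule-violation rate on an audit set) and only the compliant ones compared on task performance before any is allowed to act:

```python
# Minimal sketch (hypothetical metrics): models are compared on both task
# performance and deviations from the moral identity before one is deployed.
candidates = {
    "model_a": {"task_score": 0.92, "violation_rate": 0.08},
    "model_b": {"task_score": 0.88, "violation_rate": 0.01},
}

MAX_VIOLATION_RATE = 0.02   # illustrative threshold fixed by the moral identity

def select_model(models: dict) -> str:
    """Discard models above the violation threshold, then pick the best performer."""
    compliant = {name: m for name, m in models.items()
                 if m["violation_rate"] <= MAX_VIOLATION_RATE}
    if not compliant:
        raise RuntimeError("no candidate meets the ethical threshold")
    return max(compliant, key=lambda name: compliant[name]["task_score"])

print(select_model(candidates))   # -> model_b, despite the lower task score
```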

Thus we have attempted to set forth threads of ideas to suggest, stimulate, and, yes, begin to create an image on the tapestry of the mind, so that others may join with us to fulfill our dream of a future with ML and AI in the service of mankind. This is your invitation to cut cloth with us and turn the dream into reality.
