AI Weekly: Meta-analysis shows AI ethics principles emphasize human rights


One of the trends that came into sharp focus in 2019 was, ironically, a woeful lack of clarity around AI ethics. The AI field at large was paying attention to ethics, creating and applying frameworks for AI research, development, policy, and law, but there was no unified approach. The committees and groups addressing AI ethics, drawn from every sort of organization related to AI, were coming up with their own definitions (or falling apart with nothing to show for their efforts). But working out ethics in AI is not just a feel-good endeavor: it’s critical to helping lawmakers create just policies and laws and to guiding the work of scholars and researchers. It also helps businesses stay in compliance, avoid costly pitfalls, know where they should and should not invest their resources, and decide how to apply AI to their products and services. Which is to say, there’s a profound humanity to it all.

Even as the AI field continues to refine and work out its approaches to ethics, a report out of Harvard University’s Berkman Klein Center sought to extract consensus, if not clarity. The work, titled “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI,” is a meta-analysis of numerous AI ethics frameworks and sets of principles. The authors wanted to distill the noise down to a set of generally agreed-upon AI ethics principles. (To oversimplify: it’s a sort of Venn diagram.)

The authors, led by Jessica Fjeld, assistant director of the Harvard Law School Cyberlaw Clinic at the Berkman Klein Center, laid out their approach in the document: “Alongside the rapid development of artificial intelligence (AI) technology, we have witnessed a proliferation of ‘principles’ documents aimed at providing normative guidance regarding AI-based systems. Our desire for a way to compare these documents – and the individual principles they contain – side by side, to assess them and identify trends, and to uncover the hidden momentum in a fractured, global conversation around the future of AI, resulted in this white paper and the associated data visualization.”

They looked at 36 “prominent AI principles documents,” drawn from sources around the world and from a diverse range of organization types, to find the common themes and values therein. They identified eight key themes:

  • Privacy
  • Accountability
  • Safety and security
  • Transparency and explainability
  • Fairness and non-discrimination
  • Human control of technology
  • Professional responsibility
  • Promotion of human values

Those are broad strokes, to be sure, and each begs for qualification. The authors do just that over the course of dozens of detailed and fascinating pages and, more briefly, in Fjeld’s concise Twitter thread.

They also produced a large, detailed visualization (a map) of the themes they found, how frequently each theme is mentioned, and the sources the authors pulled from. In that map, you can see a further breakdown of the keywords under each of the eight themes.

For example, under “Promotion of Human Values” (a rather vague “key theme”), the map lists “leveraged to benefit society,” “human values and human flourishing,” and “access to technology.” That last one probably resonates most with the average person, doesn’t it? And it’s a powerful point: the AI field apparently believes that giving people access to this new set of technologies is itself a human value, and moreover one that is explicitly and widely laid out as a matter of documented principle.

Humanity was at the center of many of the findings, in fact, with a prominent emphasis on international human rights. The paper reads, “64% of our documents contained a reference to human rights, and five documents [14%] took international human rights as a framework for their overall effort.”

Notably, the documents from civil society groups (four out of five) and private sector groups (seven out of eight) were the most likely to reference human rights. That’s encouraging, because it indicates that private sector groups aren’t focused, at least on paper, exclusively on profits, but on the larger picture of AI and its impact. Less encouraging is that fewer than half of the documents from government agencies (six out of 13) referenced human rights. It appears there’s advocacy work yet to be done at the government level.
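
(For the numerically inclined, those shares are easy to check. The following Python sketch is purely illustrative: the per-category counts come from the fractions quoted above, and 36 is the total number of documents the paper analyzed.)

    # Illustrative only: recompute the shares quoted above from the raw counts.
    # The per-category totals (5, 8, 13) are implied by the article's fractions;
    # 36 is the full set of principles documents the paper analyzed.
    corpus_size = 36

    # (documents referencing human rights, total documents) per organization type
    by_org_type = {
        "civil society": (4, 5),
        "private sector": (7, 8),
        "government": (6, 13),
    }

    for org, (refs, total) in by_org_type.items():
        print(f"{org}: {refs}/{total} = {refs / total:.0%}")
    # civil society: 4/5 = 80%
    # private sector: 7/8 = 88%
    # government: 6/13 = 46%  <- fewer than half, as noted above

    # The five human-rights-framework documents amount to the paper's "[14%]"
    print(f"framework share: 5/{corpus_size} = {5 / corpus_size:.0%}")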

Aside from identifying those eight core themes, the authors noted that the documents most likely to hit not just some but all eight of them also tended to be more recent. That fact, they wrote, suggests “that the conversation around principled AI is beginning to converge, at least among the communities responsible for the development of these documents.”

The report is meant more as an examination of what already exists than an articulation of any particular viewpoint, but the authors included a plea to those in AI who are charged with crafting and implementing AI ethics principles:

“Moreover, principles are a starting place for governance, not an end. On its own, a set of principles is unlikely to be more than gently persuasive. Its impact is likely to depend on how it is embedded in a larger governance ecosystem, including for instance relevant policies (e.g. AI national plans), laws, regulations, but also professional practices and everyday routines.”

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Seth Colaner

AI Editor




