Google scraps ethics council for artificial intelligence


Google has scrapped an ethics advisory council set up to guide its work in artificial intelligence, after its controversial choice of members provoked a backlash from both inside and outside the company.

The external council was announced only nine days ago but drew instant protests, mainly over its inclusion of the head of the Heritage Foundation, a rightwing US think-tank, and the chief executive of a drone company.

The backlash turned Google’s attempt to show it is taking a responsible approach to AI into a public relations debacle. It has also highlighted the difficulty of deciding who should have a say in controlling a technology with broad social implications, particularly at such a politically polarised moment.

“I think it’s reasonable to reject members of an ethics board based on a history of statements that suggest discrimination, but it’s unreasonable to reject someone who might have a very different point of view,” said Oren Etzioni, head of the Allen Institute for AI, a research institute in Seattle.

The fallout showed that Google is “between a rock and a hard place” as it tries to appease its workers while pursuing new “revenue opportunities” such as a search engine in China, Mr Etzioni added.

The internet search company said last week that it had set up the council to “consider some of Google’s most complex challenges” in deciding when and how to apply its AI. It first laid out an ethical framework for its use of the technology last year after staff rebelled over a contract it had signed to supply image recognition technology for US military drones as part of Project Maven.

In an apparent attempt to gain wide support across the political spectrum, Google named an advisory council drawn from very different backgrounds. Members of the eight-person group included Kay Coles James, president of the Heritage Foundation, and Dyan Gibbens, chief executive of Trumbull Unmanned, a drone company which lists “integrating forward-looking solutions” for the US Department of Defense as part of its work.

The inclusion of Ms James reflected an apparent attempt to appease the company’s critics on the right. Google has faced a barrage of complaints over alleged political bias from Republicans and others, including the White House.

The gesture backfired, however, when news of the appointments brought a storm of protest on Twitter and from employees. An online petition claiming to have the support of 2,380 Google workers called on the company to remove Ms James. It objected to several of her recent positions, including supporting President Donald Trump’s declaration of a national emergency to get funding for his US border wall, and her resistance to transgender rights.

“In selecting James, Google is making clear that its version of ‘ethics’ values proximity to power over the wellbeing of trans people, other LGBTQ people and immigrants,” it said.

Alessandro Acquisti, a professor of information technology and public policy at Carnegie Mellon University in Pittsburgh, quit the board last weekend. “While I’m devoted to research grappling with key ethical issues of fairness, rights and inclusion in AI, I don’t believe this is the right forum for me to engage in this important work,” he said on Twitter.

Another member, Luciano Floridi, head of ethics at Oxford university’s Internet Institute, questioned the make-up of the board even more directly, though he did not resign.

Appointing Ms James “was a grave error and sends the wrong message about the nature and goals” of the council, he wrote on Facebook. “From an ethical perspective, Google has misjudged what it means to have representative views in a broader context,” he said.

Google conceded defeat on Thursday, as it said it was disbanding the council and “going back to the drawing board” on how to bring in external views to shape its AI work. “It’s become clear that in the current environment, [the council] can’t function as we wanted,” the company said.
