AI ethics research conference suspends Google sponsorship


The ACM Conference on Fairness, Accountability, and Transparency (FAccT) has decided to suspend its sponsorship relationship with Google, according to conference sponsorship co-chair and Boise State University assistant professor Michael Ekstrand. The organizers of the AI ethics research conference came to this decision a little over a week after Google fired Ethical AI lead Margaret Mitchell and three months after the firing of Ethical AI co-lead Timnit Gebru. Google has subsequently reorganized about 100 engineers across 10 teams, including placing Ethical AI under the leadership of Google VP Marian Croak.

“FAccT is guided by a Strategic Plan, and the conference by-laws charge the Sponsorship Chairs, in collaboration with the Executive Committee, with developing a sponsorship portfolio that aligns with that plan,” Ekstrand told VentureBeat today in an email. “The Executive Committee made the decision that having Google as a sponsor for the 2021 conference would not be in the best interests of the community and [would] impede the Strategic Plan. We will be revising the sponsorship policy for next year’s conference.”

The decision followed days of questions about whether FAccT would continue its relationship with Google after the company’s treatment of its Ethical AI team leaders. The news first emerged Friday, when FAccT program committee member Suresh Venkatasubramanian tweeted that FAccT would pause its relationship with Google.

Putting Google sponsorship on hold doesn’t mean the end of sponsorship from Big Tech companies, or even from Google itself. DeepMind, another FAccT sponsor, which was embroiled in its own AI ethics controversy in January, is also a Google company. Since its founding in 2018, FAccT has sought funding from Big Tech sponsors like Google and Microsoft; the Ford Foundation and MacArthur Foundation are also among its frequent sponsors. An analysis released last year comparing Big Tech funding of AI ethics research to the tactics of Big Tobacco found that nearly 60% of researchers at four prominent universities have taken money from major tech companies.

According to the FAccT website, Gebru, who was a cofounder of the organization, continues to work as part of a group advising on data and algorithm evaluation and as a program committee chair. Mitchell is a program co-chair of the conference and a FAccT program committee member. Gebru was fired from her role at Google in December 2020, following disputes over factors like the lack of diversity in tech companies and the review of “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In addition to arguing that pretrained language models may disproportionately harm marginalized communities, the paper questions whether performance on benchmark tests qualifies as genuine progress and highlights the potential for misuse and automation bias.

“If a large language model, endowed with hundreds of billions of parameters and trained on a very large dataset, can manipulate linguistic form well enough to cheat its way through tests meant to require language understanding, have we learned anything of value about how to build machine language understanding or have we been led down the garden path?” the paper reads.

Gebru is one of two primary authors of the paper, which was accepted this week for publication at FAccT. Her lead co-author is University of Washington linguist Emily Bender, whose writing about potential shortcomings of large language models and the need for deeper criticism received an award last summer from the Association for Computational Linguistics.

A copy of the paper VentureBeat obtained last year from a source familiar with the matter lists Mitchell as a co-author, along with Google researchers Mark Diaz and Ben Hutchinson, a trio with backgrounds in language analysis and models. Mitchell may be known today for her work in ethics, but she is most highly cited as a computer vision and NLP researcher and is the author of a 2008 master’s thesis on text generation at the University of Washington. Hutchinson worked with co-authors from the Ethical AI team at Google on a paper that found NLP models exhibit bias against people with disabilities in sentiment analysis and toxicity prediction. Diaz has examined age-related bias in text.

Bender and Gebru are listed as primary coauthors in various versions of the paper. A version of the paper made available ahead of the conference by the University of Washington also lists “Shmargaret Shmitchell” as an author.

Fallout from the firing of Gebru, a prominent researcher into algorithmic oppression and one of the only Black women to work as an AI researcher at Google, led to public opposition from thousands of Googlers and accusations of racism and retaliation. The incident also sparked questions from members of Congress with a documented interest in regulating algorithms. And it led researchers to question the ethics of receiving ethics research funding from Google. Experts in AI, ethics, and law told VentureBeat a range of policy changes could come about as a result of Gebru’s dismissal, including support for stronger whistleblower laws. Shortly after being fired, Gebru spoke about the idea of unionization as a means of protection for AI researchers, and Mitchell was a member of the Alphabet Workers Union formed in January 2021.

OpenAI and Stanford University researchers working with other experts warned last month that the creators of large language models like Google and OpenAI have only a matter of months to set standards for their ethical use before replications begin to circulate.

Other papers published at FAccT this year include analysis of common obstacles to data sharing practices in African nations, a review of an algorithm impact assessment made by Data & Society’s AI on the Ground team, and research that examines how government repression and censorship impact text data regularly used for training NLP models.

In other recent AI research conference activity, organizers of NeurIPS, the most popular annual machine learning conference, told VentureBeat the organization plans to revise its sponsorship policy following questions surrounding Huawei, a NeurIPS sponsor that reportedly made a Uighur Muslim detection system for Chinese authorities.
