The Ethics of Tech: Is Artificial Intelligence Racist?


Despite the fact that the meaningful application of ethical AI is still in its infancy (right now, it largely subsists through a crop of experimental software, extensive speculative research, a disparate set of national guidelines and vague platitudes from tech giants like Microsoft and Google), it has been touted as having the capacity to effect legitimate transformative change in the workplace. But therein lies the risk: Declaring a foundational problem solved without any probing inquiry quickly shifts resources elsewhere and, in the process, neatly conceals the oppression that remains.

In 2019, the Ontario Human Rights Commission released a report acknowledging the need for increased research into the potential impacts of replacing human judgment with crime-prediction AI, especially when policing in Black and Indigenous communities. In July, just weeks after the police killing of George Floyd sparked international protests and demands to defund the police, controversial facial-recognition company Clearview AI ceased offering its services in Canada after the Privacy Commissioner of Canada opened an investigation; those services had been used by a number of Canadian law-enforcement agencies, including the RCMP. The U.S.-based company, which became a “viral hit” with law-enforcement agencies in just a few years, had come under fire for populating its database with billions of unregulated images scraped from social media in order to help identify suspects and victims—a practice that poses a particular risk to darker-skinned people, since facial-recognition software has a documented history of misidentifying them.

Safiya Umoja Noble, co-director of the UCLA Center for Critical Internet Inquiry and author of Algorithms of Oppression, is one of the researchers on the front lines who are revealing the violent repercussions of unaudited AI. She sounded the alarm in 2010 about a fundamental flaw in Google’s algorithm that produced racist and pornographic results when the terms “Black girls” and “Black women” were fed into its search engine. Today, she has a disconcerting question about the rapid, unregulated deployment of predictive analytics in every sector of the economy: Who, exactly, is leading the charge?

“There’s now mainstream public understanding that these technologies can be harmful,” explains Noble. “But the resources for researching and studying ethics have gone right back to the original epicentres that sold us the bill of goods. Similar to when big tobacco funded all of its own favourite researchers, that’s kind of what big tech is doing.”

It’s a pivot that has resulted in companies like Google and Facebook—deflecting attention from the role their loosely controlled algorithms played in the outcome of the 2016 U.S. election and Brexit—repositioning themselves as cutting-edge thought leaders. In 2014, Google acquired DeepMind, a renowned AI research company, and has since created a sleek blog that touts the company’s social-good initiatives, like a collaboration with the LGBTQ+ organization The Trevor Project that’s intended to build a virtual counsellor-training program. “It’s like ethics has become an industry,” adds Noble.

Last March, the federal government attempted to curb the unbridled, potentially adverse use of AI by announcing a directive that sought to hold AI-driven decision-making to some degree of “transparency, accountability, legality and procedural fairness.” However, innovation often outpaces legislation, and the implementation of these regulations by governing bodies remains spotty at best.

The reality, though, is that ethical AI will never be anything other than a buzzword until it’s capable of moving beyond the perception that only some workers are worthy of its benefits. Often, workers in low-status, low-wage positions, like migrant farm workers and essential-care staff, are left out of the conversation—which means that, once again, women and people of colour are being disproportionately silenced. “We need to be very mindful of the types of voices that aren’t being heard—and [those] that are being catered to,” says Rana.

Last year, the Canadian Agri-Food Automation and Intelligence Network announced a $108.5 million project that promised to create a network of private partners that would use Canada’s strengths in AI to “change the face of agriculture.” While the decision to digitize gave lip service to potentially improving working conditions, in practice, it has resulted in some of Southern Ontario’s migrant farm workers being subjected to performance tracking through smartwatches and fingerprinting (a harrowing reality when coupled with the insufficient safety protocols that led to outbreaks of COVID-19 at a number of farms in Leamington, Ont.). “The increased use of automation will have negative consequences—from wage theft to heightened surveillance at work and at home—on the predominantly racialized labour force in the agricultural industry,” explains Chris Ramsaroop, one of the founding members of Justice for Migrant Workers. “While the industry will claim that AI and automation is being implemented to enhance productivity and improve efficiency, from our perspective, it’s based on exerting further control on workers.”

It’s no longer possible to believe that software created in the spirit of techno-optimism can promote social good through its mere existence. Rather, we must place those lofty expectations on the gatekeepers of AI—the people at the top who know there is more on the line than access to jobs that grant upward social mobility. “It’s about abolishing harmful digital systems that are fundamentally exploitative, by virtue of their existence, and dangerous to vulnerable people who are already oppressed,” says Noble. Technology on its own will never birth radical innovation. Change can only be delivered when living, breathing people imagine a new way forward, when we learn to scrutinize the ways we interact with these coded expressions of power and when we begin to demand transparency and accountability of the AI we allow into our lives. And perhaps when we can reclaim some agency by reinserting line items—barista, child-care worker, cashier—into our resumés, we can begin the process of redefining the artificial definition of valuable work experience and unlearning the insidious prejudices that have long plagued our inartificial human experience.
