
Ethical AI | Pipeline Magazine


By: RJ Talyor

This holiday season, more than 59 percent of retailers will introduce new
methods of presenting their products. Among those, 23 percent plan to fundamentally transform the way they present their products. 

What’s the one tool those retailers will use to measure the success of their new presentation methods? Artificial intelligence (AI). AI has the power to analyze billions of data points in the
blink of an eye and translate them into actionable insights.

For a human, this would take an entire lifetime. With tools such as natural language processing and computer vision, AI can translate data into the marketing components predicted to provide
the greatest return. As a result, marketers can streamline strategy and execute campaigns at the right time and place with the right copy and photo.

But there’s a catch. As AI becomes more common across multiple industries, ethical questions surrounding its creation, transparency and bias become more pressing.

This is because AI was not born out of thin air. It was created by humans and carries human biases within it. It measures what a human tells it to measure, aggregating a lifetime of knowledge
based on a human directive. So, if that human directive is biased, the AI is biased and will learn more through that biased lens. Even if the AI is built with noble intent, humans can still
develop this technology with subjective and personal opinions that make it deeply flawed.

Let’s look at an example to see how this bias might be expressed. Say that a company wants to create an ad campaign that promotes body positivity along with its product. That company may use a
collection of photos of 12 women with different body types and skin colors. The marketers are tasked with understanding which pictures perform best based on consumer engagement, so AI is
used to measure all the elements of each photo.

Do you see the problem already? AI labels the elements of the photos and assigns values to what it sees. Essentially, it labels what the human has told it to see in each photo: the
subjects’ body type or size, skin color, hair length and more. Based on these insights, AI can tell which type of body, or type of individual, drives the highest return.
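The measurement loop described above can be sketched in a few lines. All the photo labels and conversion numbers below are invented for illustration; the point is that whatever attributes a human chooses to label become the only lens through which the system "explains" performance.

```python
# Illustrative sketch of engagement analysis over human-chosen labels.
# All labels and conversion counts are invented for illustration.
from collections import defaultdict

photos = [
    {"labels": {"body_type": "slim", "hair": "long"},  "conversions": 120},
    {"labels": {"body_type": "plus", "hair": "long"},  "conversions": 45},
    {"labels": {"body_type": "slim", "hair": "short"}, "conversions": 110},
    {"labels": {"body_type": "plus", "hair": "short"}, "conversions": 50},
]

def avg_conversions_by(photos, attribute):
    """Average conversions grouped by one human-chosen attribute."""
    totals, counts = defaultdict(int), defaultdict(int)
    for p in photos:
        value = p["labels"][attribute]
        totals[value] += p["conversions"]
        counts[value] += 1
    return {v: totals[v] / counts[v] for v in totals}

# The analysis can only "explain" performance through the labels it was
# given, so it will happily rank people by body type if that is what a
# human chose to label.
print(avg_conversions_by(photos, "body_type"))
```

Nothing in this loop questions whether `body_type` is an acceptable axis of comparison; the bias enters at the moment a human decides which attributes get labeled.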

This is not ethical.

Of course, it’s necessary for humans to have discussions about diversity in ads and to account for different body types and ethnicities in their marketing. Many brands do this very
successfully. But when a machine does it, we need to carefully examine its practices to ensure they’re ethically sound. When a machine does it, there is no conversation or discussion. Instead,
the machine has been programmed to classify photos as “good” or “bad” based on how many conversions they drive.

Marketers must use AI in a bias-free manner. This can be done with the right tools and the right humans behind them.

If you want to deploy AI to improve overall marketing strategy and grow your return on investment, success starts with ethics. And ensuring ethical AI implementation requires several steps.

Implement guidelines first

This seems obvious, but it’s still important. And there are many resources available that are designed to accomplish this exact goal.

For instance, in April 2019, the High-Level Expert Group on Artificial Intelligence—made up of 52 representatives from academia, civil society and industry who were appointed by the European
Commission to issue recommendations on implementing artificial intelligence in Europe—presented “Ethics Guidelines for Trustworthy Artificial Intelligence.”

The report outlines seven key requirements that AI systems must meet in order to be deemed “trustworthy.” These include human agency and oversight, transparency, diversity and more. These
guidelines are a great place to start when implementing AI. One continental legislative body, however, cannot ensure the entire world follows its guidelines. AI is built in different nations,
in different contexts and with different competing biases.




