Microsoft and the lessons from its failed Tay artificial intelligence bot


In March 2016, Microsoft sent its artificial intelligence (AI) bot Tay out into the wild to see how it interacted with humans.

According to Microsoft Cybersecurity Field CTO Diana Kelley, the team behind Tay wanted the bot to pick up natural language and thought Twitter was the best place for it to go.

“A great example of AI and ML going awry is Tay,” Kelley told RSA Conference 2019 Asia Pacific and Japan in Singapore last week.

Tay was targeted at American 18- to 24-year-olds and was “designed to engage and entertain people where they connect with each other online through casual and playful conversation”.

Less than 24 hours after its arrival on Twitter, Tay had gained more than 50,000 followers and produced nearly 100,000 tweets.

Tay started out fairly sweet: it said hello and called humans cool. But as Tay interacted with other Twitter users, its machine learning (ML) architecture hoovered up all of those interactions, good, bad, and awful.
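That ingestion pattern is the crux of the failure. As a toy sketch in Python (not Tay's actual architecture; the ToyChatLearner class, the blocklist, and the sample messages are illustrative assumptions), a bot that updates its language statistics from every incoming message, with no gate on what it ingests, ends up echoing whatever its loudest users feed it:

```python
# Toy sketch, not Tay's architecture: a bot that "learns" by updating word
# frequencies from every message it receives. Without a gate on ingestion,
# its output distribution drifts toward whatever users feed it most.
from collections import Counter

class ToyChatLearner:
    def __init__(self):
        self.vocab = Counter()

    def ingest(self, message, content_filter=None):
        """Update the model from a user message, optionally gated by a filter."""
        if content_filter is not None and not content_filter(message):
            return  # drop messages the filter rejects
        self.vocab.update(message.lower().split())

    def most_likely_words(self, n=3):
        return [word for word, _ in self.vocab.most_common(n)]

# Hypothetical blocklist standing in for a real toxicity classifier.
BLOCKED = {"nazis", "hate"}
def simple_filter(message):
    return not (BLOCKED & set(message.lower().split()))

incoming = ["humans are cool", "hello hello friend",
            "the nazis were right", "spread hate hate hate"]

unfiltered, filtered = ToyChatLearner(), ToyChatLearner()
for msg in incoming:
    unfiltered.ingest(msg)
    filtered.ingest(msg, content_filter=simple_filter)

print("no filter:", unfiltered.most_likely_words())  # abusive terms dominate
print("filtered: ", filtered.most_likely_words())    # benign vocabulary only
```

The only structural difference between the two learners is the gate on what gets ingested, which is exactly the kind of resilience Kelley goes on to describe.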

Some of Tay’s tweets were highly offensive. In less than 16 hours Tay had turned into a brazen anti-Semite and was taken offline for re-tooling.

“Tay had acquired quite a bit of language and a lot of that language was tremendously racist and offensive. I will not repeat, but to give you an idea, Tay said: ‘The Nazis were right’,” Kelley explained.

“A lot of it was really undesirable. And you know what? Sometimes these things happen.”

See also: Artificial intelligence: Trends, obstacles, and potential wins (Tech Pro Research)

She said it was a perfect example of why, when creating something like AI that acquires information from its environment, you need to make sure it is resilient.

“As we use AI and ML to power our businesses, that also means that there can be impacts if the AI or ML goes awry,” Kelley added.

“In this case, the impact was not great for Microsoft. Microsoft said, ‘We’re moving Tay out’, issued public apologies, but interestingly, our CEO actually talked to the team and rather than say, ‘Oh, gosh, you guys were terrible. That was horrible’, he said, ‘We learned a lesson’.”

According to Kelley, this was an important lesson for all parties involved, as companies building out AI and machine learning need to make it more resilient to such abuse.

“Learning from Tay was a really important part of actually expanding that team’s knowledge base, because now they’re also getting their own diversity through learning,” she said.

“Looking at AI and how we research and create it in a way that’s going to be useful for the world, and implemented properly, it’s important to understand the ethical capacity of the components of AI.

“As we’re building out our AI, we’re going to run the risk of creating AI that rather than helping us to do more, helping our businesses to be better at cybersecurity, we’re going to start building systems that could actually do things like automate bias.”

To Kelley, preventing this includes looking at fairness.

An example she shared was a hiring tool — using AI to look over CVs to determine a good candidate versus a bad candidate.

She said feeding the AI all of the CVs belonging to those already employed is going to train the algorithm to look for more of the same.

“That doesn’t sound bad off the cuff, but what do we know about engineering and a lot of computer jobs — definitely cybersecurity jobs — are they inherently weighted towards one gender or another?” she asked. “Yeah, a lot of computer programming jobs, a lot of cybersecurity jobs, the people that are already in place doing those jobs are predominantly male. And cybersecurity, depending on which field you’re in, can be 90%.”

She said that when trained on such existing information about what a “good candidate” looks like, the tool is going to look for candidates that resemble those already doing the jobs.
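As a rough illustration of how that happens, here is a minimal sketch with synthetic data (the features, the 90% skew, and the labels are assumptions for demonstration, not Kelley's figures): a screening model trained on historical “good candidate” labels that track a gendered proxy simply learns the proxy.

```python
# Minimal sketch with synthetic data: a CV-screening model trained on the
# profiles of people already in the job learns to prefer more of the same.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Feature 0: a proxy attribute correlated with gender (hypothetical).
# Feature 1: a genuinely job-relevant skill score.
proxy = rng.integers(0, 2, n)   # 1 ~ "resembles the current workforce"
skill = rng.normal(0, 1, n)

# Historical "good candidate" labels: mostly driven by the proxy, because
# roughly 90% of the existing workforce shares it; skill barely matters.
hired = (0.9 * proxy + 0.1 * (skill > 0)) > rng.random(n)

X = np.column_stack([proxy, skill])
model = LogisticRegression().fit(X, hired)

print("learned weights [proxy, skill]:", model.coef_[0].round(2))
# The proxy weight dwarfs the skill weight: the tool now automates the
# historical bias instead of measuring candidate quality.
```

Nothing in the training step is malicious; the skew comes entirely from the labels the model was given.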

“One big thing with AI is to bring in reliability and safety. Because as we start to use AI to make really big decisions, as we’re using AI to do things like diagnosis to determine whether it’s a cancerous growth on a person or not — we want to make sure that they’re reliable,” Kelley continued.

“What if you get a cancer diagnosis, or you get a [reading] saying you’re cancer free, and it turns out you have cancer — who is going to be accountable for that?

“So all of these things need to be thought about and included in the build up of AI and ML.

“They need to be resilient to misuse too. Tay was getting very racist — we want to make sure that for other cases, they’re resistant to attack.”

See also: AI ‘more dangerous than nukes’: Elon Musk still firm on regulatory oversight

Another example Kelley shared was an automatic bathroom sensor for hand washing.

“The designers tested it against the designers’ skin colour. So it was not very inclusive, because if you had skin colour like mine, which is almost ghostly, it worked just great, but the darker your skin colour was, the less likely it was to work,” she explained. “So you had a whole system of automatic things that were completely not inclusive.”
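One way to catch that kind of failure before shipping is to report test results per group rather than as a single aggregate number. Below is a minimal sketch of that disaggregated check (the test log, group names, and 90% threshold are hypothetical, not the vendor's process):

```python
# Minimal sketch of disaggregated evaluation: measure the sensor's detection
# rate per skin-tone group instead of relying on one overall figure.
from collections import defaultdict

# Hypothetical test log of (skin_tone_group, sensor_detected_hand) pairs.
test_results = [
    ("light", True), ("light", True), ("light", True), ("light", True),
    ("medium", True), ("medium", False), ("medium", True), ("medium", False),
    ("dark", False), ("dark", False), ("dark", True), ("dark", False),
]

totals, hits = defaultdict(int), defaultdict(int)
for group, detected in test_results:
    totals[group] += 1
    hits[group] += int(detected)

for group in totals:
    rate = hits[group] / totals[group]
    flag = "  <-- fails inclusivity threshold" if rate < 0.9 else ""
    print(f"{group:>6}: detection rate {rate:.0%}{flag}")
```

On a test panel that mirrors the design team, the same sensor would have scored close to 100%, which is exactly how the gap went unnoticed.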

Also important in building out AI and ML, Kelley said, is privacy and transparency.

That means privacy for the person whose data is being used, and transparency by way of helping other people understand how a model is working.
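For transparency in particular, even a simple model can be made more legible by showing which inputs drove an individual decision. Here is a minimal sketch for a linear scoring model (the feature names, weights, and threshold are hypothetical, not any specific product's internals):

```python
# Minimal sketch of a per-decision explanation for a linear scoring model:
# show a reviewer which inputs pushed one decision, and by how much.
import numpy as np

feature_names = ["years_experience", "certifications", "referral_flag"]
weights = np.array([0.8, 0.5, 1.6])   # assumed coefficients of a trained model
bias = -2.0

def explain(applicant):
    """Print each feature's contribution to the model's score for one case."""
    contributions = weights * np.asarray(applicant, dtype=float)
    score = float(contributions.sum()) + bias
    for name, value, contrib in zip(feature_names, applicant, contributions):
        print(f"{name:>16} = {value:>3}  -> contributes {contrib:+.2f}")
    verdict = "accept" if score > 0 else "reject"
    print(f"{'total score':>16}       -> {score:+.2f} ({verdict})")

# A breakdown a non-specialist can read: this decision rests mostly on experience.
explain([3, 1, 0])
```

A per-decision breakdown like this is one of the simpler ways to let people outside the team see how a model reached a conclusion.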

“This is not just a technology problem; this is actually a bigger problem. And it’s going to need to have a diverse group of people working on the creation of the systems to make sure that they are going to be ethical,” she said.
