
Google’s Widely Rolled Out AI Search Engine Spouts Misinformation


The world’s most popular search engine is getting the facts wrong.

Google’s decision to make its AI-generated search results, AI Overview, the default experience in the U.S. was met with swift criticism after search results were plagued with errors, concerning advice and misinformation.

In one example, when searching “what is in Google’s AI dataset,” Google’s AI summary said its AI model was trained on child sexual abuse material.

Google also erroneously claimed that Barack Obama is Muslim, provided incorrect advice on treating rattlesnake bites, and suggested adding glue to pizza sauce when people searched “cheese not sticking to pizza.”

“You can add 1/8 cup of non-toxic glue to the sauce to give it more tackiness,” Google answered.

The AI search engine also said geologists recommend eating one rock per day.

To be fair, many gen AI products launch riddled with inaccuracies before they grasp the intricacies and nuances of human language, and they tend to improve quickly. But Google’s haste to roll the feature out widely opens it up to more criticism.

“The pitfalls of infusing search with AI at this point run the gamut from creators who resist the use of their work to train models that could eventually diminish their relevance, to incorrect results put forth as fact,” said Jeff Ragovin, CEO of contextual targeting provider Semasio. “On this one, it looks like Google was a bit premature.”

The AI response about President Obama violated Google’s content policy, which includes careful considerations for content that may be explicit, hateful, violent, or contradictory of consensus on public interest topics, a Google spokesperson told ADWEEK. The tech giant has blocked the violating overview from appearing on that query.
