Surprising No One, Google’s AI Overviews Are Ludicrous

Google has begun rolling out AI-powered search results, and it’s not going super well. While many of the screenshots floating around social media are probably fake, there are some real examples that are pretty troubling, as Kylie Robison reports for The Verge:

Imagine this: you’ve carved out an evening to unwind and decide to make a homemade pizza. You assemble your pie, throw it in the oven, and are excited to start eating. But once you get ready to take a bite of your oily creation, you run into a problem — the cheese falls right off. Frustrated, you turn to Google for a solution.

“Add some glue,” Google answers. “Mix about 1/8 cup of Elmer’s glue in with the sauce. Non-toxic glue will work.”

(Google pulling its advice about eating rocks from an Onion article is also pretty wild.)

Kyle Orland, writing at Ars Technica:

Factual errors can pop up in existing LLM chatbots as well, of course. But the potential damage that can be caused by AI inaccuracy gets multiplied when those errors appear atop the ultra-valuable web real estate of the Google search results page.

“The examples we’ve seen are generally very uncommon queries and aren’t representative of most people’s experiences,” a Google spokesperson told Ars. “The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web.”

I think anyone paying attention to LLMs would have seen this coming, yet Google shipped it anyway, on some of the most valuable real estate on the web. I wonder how folks there are feeling now that the company is scrambling to manually remove some results.