Google Explains Bizarre Responses by AI Overviews, Reveals Measures to Improve Feature
On Thursday (May 30), Google published an explanation for the debacle caused by its artificial intelligence (AI)-powered search tool, AI Overviews, which generated inaccurate responses to multiple queries. The AI feature for Search was introduced at Google I/O 2024 on May 14 but reportedly faced scrutiny shortly afterwards for providing bizarre responses to search queries. In a lengthy explanation, Google revealed the probable causes behind the issue and the steps taken to resolve it.
Google’s Response
In a blog post, Google began by explaining how the AI Overviews feature works differently from chatbots and other large language model (LLM) products. According to the company, AI Overviews does not simply generate “an output based on training data”. Instead, it is said to be integrated into Google’s “core web ranking systems” and designed to carry out traditional “search” tasks from the index. Google also claimed that its AI-powered search tool “generally does not hallucinate”.
“Because accuracy is paramount in Search, AI Overviews are built to only show information that is backed up by top web results”, the company said.
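Google does not disclose its internals, but the distinction it draws, between a chatbot answering purely from its training data and a system constrained to summarise top-ranked web results, can be illustrated with a rough sketch. In the hypothetical Python below, llm, search_index and their methods are illustrative placeholders under assumption, not real Google or vendor APIs.

```python
# Hypothetical illustration only: `llm`, `search_index`, `complete()` and
# `top_results()` are stand-ins, not real APIs.

from dataclasses import dataclass

@dataclass
class WebResult:
    url: str
    snippet: str
    rank: int  # position assigned by the ranking system


def answer_from_training_data(llm, query: str) -> str:
    # A conventional chatbot answer: the model replies from whatever it
    # memorised during training, with nothing guaranteeing a source backs it.
    return llm.complete(f"Answer the question: {query}")


def retrieval_grounded_overview(llm, search_index, query: str):
    # The approach Google describes: pull top-ranked web results first and
    # only summarise what those results actually say.
    results = search_index.top_results(query, k=5)
    if not results:
        return None  # a "data void": too little high-quality content to ground on
    sources = "\n".join(f"[{r.rank}] {r.snippet} ({r.url})" for r in results)
    prompt = (
        "Using ONLY the sources below, answer the question and cite the "
        "source numbers you relied on.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )
    return llm.complete(prompt)
```

In this sketch the answer is only as good as the retrieved pages, which is why a query with no high-quality coverage, or only satirical coverage, can still produce a poor result.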
Then what happened? According to Google, one of the reasons was the inability of the AI Overviews feature to filter out satirical and nonsensical content. Referring to the “How many rocks should I eat” search query, which yielded a response suggesting the user eat one rock a day, Google said that, prior to the search, “practically no one asked that question.”
This, the company said, created a “data void”, an area where high-quality content is limited. For this particular query, satirical content had also been published. “So when someone put that question into Search, an AI Overview appeared that faithfully linked to one of the only websites that tackled the question”, Google explained.
The company also admitted that AI Overviews drew on forums which, although a “great source of authentic, first-hand information”, can lead to “less-than-helpful advice”, such as using glue to make cheese stick to pizza. In other instances, the search feature misinterpreted language on web pages, leading to inaccurate responses.
Google said it “worked quickly to address these issues, either through improvements to our algorithms or through established processes to remove responses that don’t comply with our policies.”
Steps Taken to Improve AI Overviews
Google has taken the following steps to improve the responses generated by its AI Overviews feature (a rough illustration of how such checks might work follows the list):
- It has built better detection mechanisms for nonsensical queries, limiting the inclusion of satirical and nonsensical content.
- The company says it has also updated systems to limit the use of user-generated content in responses that could offer misleading advice.
- AI Overviews will not be shown for hard news topics, where “freshness and factuality” are crucial.
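The list above can be read as a set of checks applied before an AI Overview is triggered. The hypothetical Python below sketches that idea; the marker lists and heuristics are toy stand-ins under assumption, not Google’s actual classifiers or systems.

```python
# Illustrative sketch of pre-generation checks, not Google's implementation.
from dataclasses import dataclass

@dataclass
class RankedResult:
    url: str
    is_user_generated: bool  # e.g. a forum or Q&A thread

# Toy marker lists standing in for real classifiers.
NONSENSE_MARKERS = {"rocks should i eat", "glue on pizza"}
HARD_NEWS_MARKERS = {"election results", "breaking news"}


def should_show_overview(query: str, results: list[RankedResult]) -> bool:
    q = query.lower()
    # 1. Better detection of nonsensical or satirical queries.
    if any(marker in q for marker in NONSENSE_MARKERS):
        return False
    # 2. No AI Overviews for hard news, where freshness and factuality are crucial.
    if any(marker in q for marker in HARD_NEWS_MARKERS):
        return False
    # 3. Limit reliance on user-generated content that could offer misleading advice.
    grounded = [r for r in results if not r.is_user_generated]
    return len(grounded) > 0  # otherwise fall back to ordinary search results
```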
Google also said that it has monitored feedback and external reports for the small number of AI Overviews responses that violate its content policies. It said, however, that such responses appeared on “less than one in every 7 million unique queries”.