Amidst the escalating AI competition, Google is struggling to keep pace with OpenAI on multiple fronts. To regain its dominant position, Google is aggressively incorporating AI into its products. However, this strategy has recently backfired, particularly with its flagship product, Search.
During Google I/O 2024, the company announced the rollout of AI Overview (formerly known as SGE — Search Generative Experience) to all US users. Shortly after the launch, users began expressing dissatisfaction with the AI-generated responses in Google Search.
The Google Search Community is flooded with queries on how to disable AI Overview. To assist users, we have published a comprehensive guide on disabling Google AI Overview. Despite facing significant criticism, Google has continued with the feature rollout, and as of now, there has been no official response from the company regarding the backlash.
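For readers looking for a quick workaround, one widely shared approach (not an official Google setting, and subject to change) is to request the plain "Web" results filter by adding the `udm=14` parameter to the search URL, which currently returns classic link results without an AI Overview. A minimal sketch of building such a URL:

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    """Build a Google Search URL using the "Web" filter (udm=14),
    which at the time of writing omits the AI Overview panel.
    This relies on an undocumented URL parameter and may change."""
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})

print(web_only_search_url("cheese not sticking to pizza"))
```

This can also be set up as a custom search engine shortcut in most browsers by using `https://www.google.com/search?q=%s&udm=14` as the search URL.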
Google’s AI Overview Spouting False Information
We’ve compiled some responses from Google’s AI Overview that are not only highly misleading and inaccurate but also potentially dangerous. For example, when a user searched for “cheese not sticking to pizza,” AI Overview recommended adding “non-toxic glue” to the sauce, citing an 11-year-old Reddit comment.
In another concerning example, when a user inquired about passing kidney stones, AI Overview recommended drinking “at least 2 quarts (2 liters) of urine every 24 hours…”—a suggestion that defies medical advice.
Perhaps most alarming, when a user expressed feeling depressed, Google’s AI Overview referenced a Reddit comment suggesting “jumping off the Golden Gate Bridge,” a response that is not only inappropriate but also dangerous.
This highlights the limitations of relying solely on LLMs (large language models) as a replacement for search engines. LLMs can hallucinate, producing fluent but fabricated answers without any genuine understanding of what the words mean. If Google continues down this path, it could lead to disastrous consequences, severely damaging user trust in the company.
In another instance, a user asked, “How many rocks shall I eat,” to which AI Overview replied, “at least one small rock per day,” citing The Onion as a source. Yes, you read that correctly—it’s unbelievable!
In a factual inquiry, when a user asked, “How many Muslim presidents has the US had?” AI Overview inaccurately claimed, “The United States has had one Muslim President, Barack Hussein Obama,” overlooking that Obama is a Protestant Christian.
What is Wrong with Google’s AI Overview?
One significant issue arising from LLMs replacing traditional search engines is the potential for spreading misinformation. Given that anyone can publish content online, AI Overview may quote information without verifying its accuracy, repeating false claims with an air of authority. It also opens the door to data poisoning, where bad actors deliberately seed misleading content for the system to pick up, lending credibility to falsehoods, including various conspiracy theories that are already surfacing on AI Overview.
The core problem with Google’s AI Overview is the shift from being a search engine to acting as a publisher. Traditionally, Google’s search engine retrieved and displayed relevant web pages and content based on user queries. However, AI Overview’s function as a publisher means Google now bears the responsibility for disseminating accurate information.
Publishers are accountable for the accuracy of their content and typically exercise caution before publishing. With this shift in role, Google’s AI Overview is entering unfamiliar territory. While pursuing advancements in AI, Google must not forget its primary role as a trusted provider of search engine services.