
Google's AI Overview feature faces criticism for inaccuracies and potential dangers.
Google recently launched an AI Overview feature for Google Search, designed to offer summarized answers directly on the search results page. However, the feature has drawn widespread criticism for providing false, misleading, and potentially dangerous information. Notable errors include treating jokes as factual data, such as suggesting "1/8 cup of non-toxic glue" for pizza, recommending "eating at least one small rock per day", or claiming that "everything on the internet is 100% real" (yeah, this one is kind of ironic...).
Additionally, the AI has been found to summarize incorrect information from unreliable sources, such as a disputed historical library page or fan fiction about a non-existent film remake, and it sometimes answers a slightly different question than the one asked, leading to confusion.
Despite these issues, Google claims that most AI Overviews provide high-quality information and says it has conducted extensive testing to improve the system's accuracy and safety. The company has addressed some errors through updates and plans to avoid showing AI summaries for certain explicit or dangerous topics to mitigate potential harm. However, there are still many concerning cases that make it difficult to recommend the tool as a reliable source or as a competitor to alternatives like Perplexity, at least for now.


So far, it looks like I made the right choice by avoiding everything AI like the plague. Frankly, I don't believe it will ever be good enough that I'd actually trust it.
I think it will keep improving. Right now it ain't reliable, but then, human sources aren't reliable either. For me, the point is free versus proprietary software. As proprietary software it will follow the path of Windows, Mac, Facebook, Twitter, etc., i.e., a pure capitalist business abusing users whenever and however it can, just a means to profit. As free software (GPLed, etc.) it could evolve into a useful tool and maybe even reach its full AGI potential...