But since its limited launch in the United States, many users have reported numerous factual errors and even outright false statements in the AI-generated summaries. Viral screenshots on social media show AI Overviews asserting that Barack Obama is Muslim, advising users to add non-toxic glue to pizza sauce, or claiming that no African country's name starts with the letter K, overlooking Kenya.
For experts, these "hallucinations" are not surprising coming from current artificial intelligence models such as Gemini, which still struggle to reliably separate truth from falsehood in the data used to train them. Google CEO Sundar Pichai himself admits that these mistakes are an "inherent characteristic" of generative artificial intelligence that the company has yet to fully eliminate.
But while these errors worry many observers, Google remains confident. A company spokesperson described them as arising from "generally very rare queries" that are "not representative of most people's experiences." The company promises to use these "isolated examples" to improve AI Overviews and says it will take action against violations of its terms of use.