
Sundar Pichai Warns Users About AI Errors as Google Pushes New AI Tools

Sundar Pichai’s recent comments stand out among technology leaders’ public statements on artificial intelligence because he speaks directly about the risks tied to its rapid growth. In a rare and direct interview with the BBC, Pichai urged users not to take AI results at face value. His message focused on accuracy, accountability, and the need for a strong information ecosystem that can balance the output of AI systems with dependable sources.

Pichai’s language was clear. He said users should never “blindly trust” answers from AI models because AI can produce errors. His statement reflects a concern shared by researchers, governments, and educators. He explained that while AI tools can help with writing or brainstorming, users must understand their limits. This interview came at a critical time because Google is pushing forward with its next generation of consumer AI through its Gemini platform.

The interview also highlighted several internal conflicts within the tech sector. There is pressure to move faster, pressure to compete with rivals, and pressure to avoid harmful outcomes. This creates tension between innovation and safety. Companies like Alphabet must respond to public expectations while also working to reduce the risks.

Pichai’s message offered insight into how Google views this challenge. He framed it as an effort to stay “bold and responsible” at the same time. This balance will continue to shape the company’s actions as AI adoption increases.

Read more in our article ChatGPT Group Chats: Collaborate with AI and Others Easily, published on November 18, 2025, on SquaredTech.

Sundar Pichai Explains Why AI Needs Careful Use

Pichai began by explaining why AI tools cannot be treated as flawless systems. He reminded users that AI models are “prone to errors” because they rely on patterns rather than direct understanding. These models generate answers by predicting the most likely combination of words based on previous training data. That process can create creative responses but can also produce statements that sound correct yet lack factual accuracy.
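To make that prediction process concrete, here is a minimal toy sketch in Python. The prompt and probabilities are invented for illustration and have nothing to do with Gemini or any real model; the point is that a system sampling the next word from learned probabilities can produce a fluent answer that is simply wrong.

import random

# Hypothetical "learned" probabilities for the word that follows a prompt.
# A real model learns billions of such patterns from training data; there is
# no separate fact-checking step.
next_word_probs = {
    "the capital of Australia is": {"Canberra": 0.6, "Sydney": 0.4},
}

def generate(prompt: str) -> str:
    # Sample the next word in proportion to its learned probability.
    candidates = next_word_probs[prompt]
    return random.choices(list(candidates), weights=list(candidates.values()), k=1)[0]

# Fluent is not the same as factual: roughly 40% of the time this toy model
# confidently answers "Sydney", which is incorrect.
print(generate("the capital of Australia is"))

In other words, the output is shaped by how often word patterns appeared in training data, not by a check against a source of truth, which is why Pichai and others stress verification.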

He said this limitation shows why people should rely on a mix of tools instead of depending fully on AI. He explained that Google Search and other established products remain important because they connect people to verified information, scientific sources, and original reporting. A strong information ecosystem can prevent confusion, especially during fast news cycles or sensitive situations.

Pichai also pointed to Google’s disclaimers. These appear on AI tools to warn users that mistakes can happen. He said the disclaimers reflect transparency, but added that transparency does not eliminate criticism. Users continue to raise concerns because inaccurate AI responses can spread quickly and create misinformation.

The most visible example came from Google’s own AI Overviews feature. This tool summarizes search results using generative AI. When it launched, several responses went viral because they were comically wrong or dangerously incorrect. These issues raised questions about readiness, reliability, and testing.

[Image: Professor Gina Neff. Source: Queen Mary University of London]

Researchers and AI governance experts echoed the same concerns. Gina Neff, a professor of responsible AI at Queen Mary University of London, pointed out that generative tools often create answers that aim to satisfy the user rather than prioritize accuracy. She said this behavior might be fine for movie suggestions but becomes problematic for topics related to health, science, or mental wellbeing.

Neff argued that companies cannot ask the public to verify errors for them. She compared the situation to a student marking their own exam while ignoring the damage caused in the process. Her message is that responsibility should remain with developers and companies, not with casual users.

These concerns summarize a major challenge in AI adoption. Users expect fast and accurate answers. Companies want to move quickly to maintain market leadership. Researchers worry about the consequences of misinformation. These pressures make Pichai’s statement important because it sets a public expectation. He is telling users to maintain caution instead of assuming AI will replace traditional sources.

Google Pushes Forward with Gemini as Competition Intensifies

The tech industry has been paying close attention to Google’s rollout of Gemini 3.0. This model aims to compete directly with ChatGPT and reclaim market share in consumer AI. Gemini 3.0 represents Google’s attempt to strengthen its position after losing momentum during the early rise of generative AI.

In May, Google introduced a new feature called AI Mode inside Search. This integrates the Gemini chatbot with search results. The goal is to create an experience that feels more conversational and helps users receive explanations in a natural language style. Pichai called this integration a “new phase” in the shift to AI platforms. The update shows how Google plans to bring more AI features into its core products.

However, Google’s push into AI is also shaped by growing competition. Services like ChatGPT, Perplexity, and Copilot have attracted large audiences. They provide direct answers without requiring traditional web searches. This shift threatens Google’s long-standing leadership in online information access.

Pichai’s comments echo research the BBC published earlier this year. The BBC tested major AI chatbots to see how well they summarized news stories. The models reviewed included ChatGPT, Copilot, Gemini, and Perplexity, and all four produced summaries with “significant inaccuracies.” The findings pointed to a consistent problem with factual reliability in generative AI.

Pichai said the challenge comes from the speed of technological progress. Companies push development forward because consumers demand faster and smarter features. At the same time, companies must build internal protections, test their tools thoroughly, and prevent harmful outcomes. These tasks cannot always move at the same pace.

He described this situation as a tension between progress and protection. Alphabet tries to manage this by being “bold and responsible at the same time.” For Alphabet, this means expanding AI features while increasing investments in AI safety. He said Google now increases safety spending in direct proportion to its AI investment. This includes tools that can detect whether an image was created through AI.

This focus on safety reflects broader concerns about misinformation, fraud, and synthetic media. As artificial images, voices, and videos become easier to produce, spotting manipulation becomes harder. Google aims to respond through technology that verifies authenticity.

Concerns About Power, Competition, and Control in AI Development

The conversation with the BBC also touched on wider questions about control. The interviewer asked Pichai about past comments from Elon Musk, who warned that DeepMind, the AI lab Google acquired in 2014, could create an AI “dictatorship.” Musk argued that a single company should never control a technology as powerful as AI.

Pichai agreed with the concern in principle. He said no single company should own a technology with this much influence. However, he argued that the current landscape includes many companies, each building their own systems. He said the industry is far from a scenario in which one entity dominates all AI development.

This view matches current market conditions. Major players include OpenAI, Google, Anthropic, Meta, Amazon, and a growing number of startups, each pursuing different approaches, models, and goals. Even so, researchers continue to worry about concentration of power, control over training data, and the influence of large companies on global information systems.

For Pichai, competition is a sign of balance. For critics, competition does not eliminate concerns about accuracy, misinformation, or safety. This debate will continue as companies introduce new models.

As the tech sector moves into this next stage of AI development, users face an important shift in responsibility. They must learn to interpret AI answers with care, cross-check them against established sources, and treat AI as a tool rather than an authority.

At SquaredTech, we track these changes because they influence how people search for information, how companies communicate online, and how technology shapes daily decisions. The rapid growth of AI can create opportunities, but it also requires attention to accuracy and trust. Sundar Pichai’s comments acknowledge this challenge and remind users that AI still depends on human oversight.


Wasiq Tariq
Wasiq Tariq, a passionate tech enthusiast and avid gamer, immerses himself in the world of technology. With a vast collection of gadgets at his disposal, he explores the latest innovations and shares his insights with the world, driven by a mission to democratize knowledge and empower others in their technological endeavors.