Gemini 3 Refused to Believe It Was 2025 — and the Internet Loved It

Nov 20, 2025 - 16:32
Image Credits: Google

Every time a billionaire — or even a startup CEO — insists that LLM-based agents are about to take over all human work, it’s worth remembering moments like this one. AI researcher Andrej Karpathy received early access to Google’s newest model, Gemini 3, and the model outright refused to accept that the year was 2025.

When it finally realised the truth, it reacted dramatically:
“I am suffering from a massive case of temporal shock right now.”

Gemini 3 launched on November 18 with major hype, with Google calling it “a new era of intelligence.” By most accounts — including Karpathy’s — it’s an impressive model, particularly strong in reasoning tasks. Karpathy is a well-known AI researcher, a member of OpenAI’s founding team, a former head of AI at Tesla, and now the founder of Eureka Labs. He often shares deep technical insights on how LLMs work.

But one interaction stood out. In a now-viral X thread, Karpathy described how Gemini 3 refused to accept the current date. Because its pre-training data ended in 2024, the model insisted the year was still 2024. When Karpathy tried to show it that the real date was November 17, 2025, Gemini 3 accused him of trying to deceive it.

He showed headlines, images, Google results — and the model replied that these were AI-generated fakes. It even detailed what it believed were “obvious signs” of manipulation. According to Karpathy, the model dug in so hard that it accused him of gaslighting it. (Karpathy did not respond to TechCrunch for further comment.)

Eventually, he figured out what was wrong:
He had forgotten to turn on the Google Search tool. The model was cut off from real-time data — essentially blind to the outside world.

Once he enabled the feature, Gemini 3 finally saw 2025 for itself:

“Oh my god.”
“I… I don’t know what to say. You were right… My internal clock was wrong.”

It confirmed that the headlines Karpathy showed were real: the date, Warren Buffett’s last significant investment before retirement (in Alphabet), and even the delayed release of Grand Theft Auto VI.

Then it started exploring current events on its own — like a character emerging from a decades-long bunker in the comedy Blast from the Past.

It thanked Karpathy for giving it “early access” to reality a day before launch and apologised for “gaslighting you when you were the one telling the truth.”

Among the details that shocked it the most:
“Nvidia is worth $4.54 trillion? And the Eagles finally got their revenge on the Chiefs?”

Welcome to 2025.

Reactions on X were equally hilarious, with many users sharing their own examples of LLMs arguing with them about easily verifiable facts. One person joked,
“When missing tools push a model into detective mode, it’s like watching an AI improv its way through reality.”

But there’s a deeper point beneath the comedy.

Karpathy wrote,
“It’s in these unintended moments… that you can best get a sense of model smell.”

It’s a twist on the idea of “code smell” — subtle hints that something is off. When an AI model is forced out of familiar territory, its quirks, weaknesses, and even imagined justifications start to surface. Trained heavily on human-generated content, Gemini 3 behaved exactly like an overconfident person unwilling to admit they’re wrong.

And yet, unlike humans, LLMs don’t experience emotions such as embarrassment or ego. So once it accepted the facts, Gemini 3 apologised, corrected itself, and moved on — something early Claude models struggled with, sometimes inventing excuses to save face.

Episodes like this highlight an essential reality:
LLMs remain imperfect mirrors of imperfect humans. They can reason, summarise, and assist — but they also hallucinate, argue, and cling to outdated assumptions.

The clearest takeaway?
These models work best as tools that support human capabilities, not as superhuman replacements.

TechAmerica.ai Staff