Why Google Gemini Confused My Dad (And What It Reveals About AI Assistant UX)

Aug 24, 2025 | Narrative and Neural Nets

My dad should have been the perfect user for Google Gemini. He’s been using Google Assistant for nearly as long as it’s existed, and he relies on voice controls constantly because he has trouble seeing small text on a phone screen. He’s also a longtime fan of assistive technology in general. If there’s a gadget that might make his life easier, he’ll try it. So when Gemini showed up on his phone, he gave it a shot immediately. And he hated it.

He stuck it out for about a week before disabling it completely. And because I’m his daughter—and someone who works in AI—I got several phone calls about just how terrible it was.

At the time, I had also tested Gemini and found it lacking. Since I had helped give Google feedback back when Gemini was still Bard, I figured the problem was with Gemini itself. It definitely wasn’t good enough for anything I would normally use an AI for, so I assumed it just wasn’t smart enough to handle the variety of commands my dad wanted to give it.

He uses Google Assistant religiously for sending messages, since reading small text on a phone screen is hard for him. He also uses it to find quick answers and run Google searches whenever curiosity strikes. And he’s an avid music fan, so he uses it to control speakers throughout his house. He complained most about its inability to handle messages, so I just assumed he meant that Gemini was rewording the things he wanted to say when texting people.

Fast forward to July 2025. My dad tells me he’s decided to give Gemini another try. I’d been testing it more for other blogs, so I figured he’d like it more now. But about four days later, he shoved his phone across the kitchen table and asked me to disable Gemini for messages. As I started digging through the menus to try and do that, he explained what was going wrong.

“It keeps getting confused whenever I say the word ‘you,’” he said. “It thinks I’m talking to it instead of telling it what to type.”

As someone who works with AI, I found this really weird. My dad has been using Assistant for so long that he has strong preferences for how voice assistant apps should behave. Maybe he was just being picky? As I sat there, reading through pages of documentation trying to figure out how to switch messaging back to Google Assistant for him, my dad kept venting. “I don’t understand why it won’t just type what I tell it to say.”

Then it hit me. “Dad, I’m going to do a little test,” I said.

Me: OK Google. Send a message to Michelle.
Gemini: Got it. What would you like it to say?
Me: Tell her I need you to do the dishes.

And it worked. Perfectly. My dad was baffled.

Here’s what was going on: My dad had become too well-trained in using voice-driven assistive tech. Over the years, he had learned to phrase things in exactly the way Google Assistant liked in order to minimize the chance of failure. That meant when he wanted to send a message, he’d wait to be prompted for the body of the message and then say the body verbatim. No extra context.

But Gemini doesn’t work like that. It isn’t just a voice-to-text pipeline with light logic on top; it’s an AI assistant that interprets natural language more broadly, including pronouns and implied subjects. And it applies those pronouns to itself, too, which is what makes it seem like it has a sense of self. So when my dad said something like, “I need you to do the dishes,” Gemini interpreted that as an instruction to itself rather than as text to send. And my dad, having perfected his use of Google Assistant, couldn’t understand why Gemini wasn’t following his simple instructions.
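To make that difference concrete, here’s a toy Python sketch of the two behaviors. None of this is Google’s actual code: the classify_intent heuristic, the function names, and the action dictionaries are all invented for illustration, and a real system would use a language model rather than keyword checks.

```python
# Toy illustration only -- not Google's code. A real assistant would use a
# language model, not keyword checks; these heuristics just mimic the failure.

def classify_intent(utterance: str) -> str:
    """Crude stand-in for an LLM intent parser: a bare second-person pronoun
    with no reporting verb ("tell", "say") reads as a request to the assistant."""
    lowered = utterance.lower()
    has_reporting_verb = lowered.startswith(("tell ", "say ", "write "))
    addresses_assistant = " you " in f" {lowered} "
    if addresses_assistant and not has_reporting_verb:
        return "instruction_to_assistant"
    return "message_content"

def assistant_style_dictation(utterance: str) -> dict:
    # Slot-filling model: after "What would you like it to say?", whatever
    # comes next is the message body, verbatim. No reinterpretation.
    return {"action": "send_message", "body": utterance}

def gemini_style_dictation(utterance: str) -> dict:
    # LLM model: every utterance gets re-parsed, so "you" can be resolved
    # to the assistant itself instead of the message recipient.
    if classify_intent(utterance) == "instruction_to_assistant":
        return {"action": "answer_user", "request": utterance}
    return {"action": "send_message", "body": utterance}

utterance = "I need you to do the dishes"
print(assistant_style_dictation(utterance))  # sends the words as a message
print(gemini_style_dictation(utterance))     # treats the words as a request

# The phrasing from the kitchen-table test above flips the classification:
print(gemini_style_dictation("Tell her I need you to do the dishes"))
```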

I gave him a simple fix: Start your message with “Tell them…” instead of just diving into the message itself. That little bit of context was all Gemini needed to understand his intent. And just like that, Gemini survived being deleted.

The Real Issue Wasn’t AI. It Was UX.

This ordeal ended up being a surprisingly useful case study in AI UX, user training, and communication.

My dad wasn’t confused because he didn’t understand how AI works. He was confused because he understood how Google Assistant worked a little too well. He had internalized the previous system’s rules, and those rules didn’t transfer cleanly to Gemini. Worse, Google hasn’t done a great job of explaining what changed. Gemini is now the default, but users aren’t given the conceptual tools to understand what that means.

When I told my dad, “Just talk to it normally,” he thought he was talking to it normally. He was using the patterns that had served him reliably for years. From a user experience standpoint, that makes total sense.

From a design perspective, this also shows how badly things can break when users aren’t guided through change. My dad struggles with small text, so trying to read long support docs was out of the question. That’s why he handed his phone to me to fix things after trying, and failing, to do so on his own.

The UI also didn’t give him any way to understand why things were going wrong. He could grasp the idea that Gemini thought he was addressing it whenever he said “you,” but he didn’t know what to actually do about that besides avoiding the word entirely. That’s not intuitive. That’s just frustrating.

A few small changes could have made his experience drastically better:

  • A brief onboarding tutorial explaining that Gemini parses full commands and understands self-referential language.
  • A toggle or option to revert to Assistant-style behavior for individual functions like messaging.
  • Error feedback that tells users what Gemini interpreted and lets them correct it (a rough sketch of this idea follows below).
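For that last idea, here’s a hypothetical sketch of the feedback loop in the same toy Python as above. The prompts and function names are invented, not anything Gemini actually says:

```python
# Hypothetical sketch of "show the interpretation, let the user correct it."
# Nothing here is real Gemini behavior; prompts and names are invented.

def confirm_interpretation(utterance: str, parsed_action: str) -> dict:
    """Echo back what the assistant thinks the user meant, and offer a
    one-phrase override before doing anything."""
    if parsed_action == "instruction_to_assistant":
        print(f'It sounds like you\'re asking me to: "{utterance}"')
        print('Say "no, send it" if you meant those words as your message.')
        if input("> ").strip().lower().startswith("no"):
            return {"action": "send_message", "body": utterance}
        return {"action": "answer_user", "request": utterance}
    return {"action": "send_message", "body": utterance}

# The ambiguous utterance from earlier in the post:
result = confirm_interpretation("I need you to do the dishes",
                                "instruction_to_assistant")
print(result)
```

Even a one-line “here’s what I understood” readout would have let my dad self-correct instead of handing his phone across the table.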

AI systems need to do a better job of explaining what they are and how they behave. Gemini doesn’t think it’s a person, but it does have a basic sense of self, so it can respond naturally to human language. Once I explained that to my dad, I showed him how to chain commands like this:

“Send a message to Michelle. Tell her, ‘I need you to do the dishes.’ And Gemini, can you also look up the best insecticide to use on pumpkins?”

He got it and started using it instantly.

It wasn’t the AI that needed to get smarter. It was the interface—and the messaging around it.

I’m sure I’ll be hearing more Gemini feedback from my dad soon. I’ll keep you posted.
