The Razr Ultra misses the mark on this crucial feature, but Google and Samsung have the capability—and the responsibility—to improve it.


There's a lot I appreciate about the new Razr Ultra 2025, especially the introduction of the AI Key. Initially, I thought it would unlock a range of possibilities and provide easy access to the extensive AI features in Motorola's Moto AI suite. Unfortunately, the execution isn't as strong as it could be, and the AI Key often feels like an afterthought, missing its chance to be Motorola's Action Button.

The shortcomings become even more obvious in an increasingly AI-driven world. Motorola's take on AI falls short of what Google and Samsung offer, but it's still a modest first step. More importantly, the AI Key itself holds promise if executed correctly, and both Google and Samsung would be wise to consider a similar button on their future devices.

Moto AI Limitations

Moto AI running on a Motorola Edge 2025

(Image credit: Nicholas Sutrich / Android Central)

Motorola is attempting to carve out its own AI niche with the Moto AI chatbot, a conversational agent in the vein of Google's Gemini or Samsung's Bixby, albeit a less capable one. It can hold a conversation, but what it can actually do on the phone is quite limited, which undermines the point of having an AI assistant in the first place. Moto AI is a serviceable option, but it still falls short of what Google and Samsung offer.

On the Razr Ultra 2025, you can set the AI Key to open the Moto AI overlay with a long press. Unfortunately, that's the only long-press action available, which feels like a major limitation. For a button with so many potential uses, the choice comes down to Moto AI or nothing at all.

The Razr Ultra 2025 AI key

(Image credit: Derrek Lee / Android Central)

So I'm stuck using Moto AI. That's manageable, but the real friction comes after I press the AI Key: I then have to tap the mic button before I can start talking to Moto AI. That extra step feels unnecessary compared to how Gemini behaves when triggered via the power button or a corner swipe, where it simply starts listening automatically.


This may be because Moto AI analyzes what's on your display when it's activated, which helps it suggest what to do next. Even so, I don't see why it can't listen for voice input while it runs that analysis.

Moto AI running on a Motorola Edge 2025

(Image credit: Nicholas Sutrich / Android Central)

When I bring up Gemini while watching a YouTube video, for instance, it recognizes that I'm viewing a video and displays options like "Ask about video" or "Talk Live about video," yet it still starts listening right away so I can simply begin speaking. That alone highlights why the AI Key's functionality should be expanded beyond Motorola's own AI.