
All of Google’s glasses will run on the Android XR operating system, which debuted on the Samsung Galaxy XR headset in October.
Crucially, Google’s glasses will be powered by the company’s Gemini AI model, which is currently a far stronger model than Meta AI. Gemini could prove to be Google’s biggest advantage, along with the company’s deep contextual knowledge of people who use Gmail, Google Photos, Google Docs, Tasks, Notes, and other Google products.
Google also has industry-leading services that could make its glasses better, Google Translate and Google Maps among them. At the announcement, Google demonstrated a real-time translation feature delivered either as on-screen captions or as audio translation through the speakers. As a user of Ray-Ban Meta’s Live Translate feature, I can tell you that captions are far better: the audio translations often play while you or the other person is talking, so you end up understanding even less than you would without any translation at all.
