Meta’s collaboration with Ray-Ban and Oakley continues to drive the smart glasses market, with the latest generations focusing heavily on Artificial Intelligence (AI) integration and, for the first time, a built-in visual display. Meta positions these devices as a gateway to a post-smartphone world, offering hands-free utility that keeps users engaged with the world around them rather than with a screen.
Core Features and New Hardware
The latest releases, most notably the Meta Ray-Ban Display and the Ray-Ban Meta Gen 2, represent a significant leap in both functionality and ambition.
- Integrated Display (Meta Ray-Ban Display): This new flagship model is Meta’s first consumer smart glasses to feature an in-lens display. This full-color, high-resolution screen is visible only to the wearer and disappears when not in use. It allows users to check messages, preview photos, see translations, and access turn-by-turn navigation without having to pull out their phone. The display is placed slightly off-center to avoid obstructing the user’s view.
- The Meta Neural Band: The most futuristic innovation is the optional Meta Neural Band, a companion wrist unit for the Display glasses. Using surface electromyography (sEMG) technology, the wristband reads tiny electrical signals from muscles, allowing the user to control the glasses with subtle, intuitive hand movements (like pinching or wrist flicks), enabling silent interaction and text input.
- Enhanced Camera and Audio (Gen 2): The second-generation models (Gen 2) boast a significantly improved 12MP ultrawide camera, supporting 3K video recording at 30 FPS and up to 1200p at 60 FPS. They also feature an improved five-microphone array and open-ear speakers for high-quality audio capture, music, and calls, even in noisy environments.
- Improved Battery Life: Both Gen 2 and Display models have seen major battery life improvements, offering up to 6–8 hours of mixed use per charge, with the portable charging case providing multiple full recharges.
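The Neural Band’s sEMG approach, reading muscle activity at the wrist and mapping it to commands, can be illustrated conceptually with a toy signal-processing sketch. This is purely an illustration of the general idea (rectify the signal, smooth it into an envelope, and flag threshold crossings as gesture onsets), not Meta’s actual implementation; the function name, window size, and threshold are all invented for the example.

```python
# Toy illustration (NOT Meta's implementation): flagging candidate
# "pinch" gestures in a stream of surface-EMG samples by thresholding
# a moving-average envelope of the rectified signal.
from collections import deque

def detect_pinch(samples, window=5, threshold=0.5):
    """Return sample indices where the sEMG envelope first rises
    above the threshold (candidate gesture onsets)."""
    buf = deque(maxlen=window)  # sliding window of recent samples
    onsets = []
    above = False
    for i, s in enumerate(samples):
        buf.append(abs(s))                # rectify the raw sample
        envelope = sum(buf) / len(buf)    # moving-average envelope
        if envelope >= threshold and not above:
            onsets.append(i)              # rising edge = gesture onset
            above = True
        elif envelope < threshold:
            above = False                 # re-arm after activity ends
    return onsets

# Quiet baseline, a burst of muscle activity (a "pinch"), then quiet.
signal = [0.02, -0.03, 0.01, 0.9, -1.1, 1.0, -0.95, 0.04, 0.02, -0.01]
print(detect_pinch(signal))  # → [5]
```

A production system would use far more sophisticated machine-learned decoding across multiple electrode channels, but the core pipeline (sense, condition, classify) follows this shape.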
Key AI and Software Updates
The central theme across all new models is AI integration, transforming the glasses from simple recording devices into full-fledged, context-aware assistants.
- Live AI and Visual Context: The Live AI feature allows users to converse naturally with Meta AI while the glasses are actively “seeing” the world around them. For example, a user can point at an ingredient and ask for a recipe, or ask for a landmark’s history. This turns the AI into a real-time, visual assistant.
- Real-time Translation: The glasses now support live translation for conversations in multiple languages. For travelers and anyone crossing a language barrier, this feature displays translated text directly on the in-lens display or delivers audio translation through the speakers.
- Hands-Free Utility: New software updates have expanded voice and gesture commands to seamlessly handle daily tasks, including:
  - Calendar Integration: Connect Google and Outlook calendars to create events or check schedules using voice commands.
  - “Restyle” Photos: Use Meta AI to automatically apply various artistic styles to captured photos.
  - App and Media Control: Control music playback, answer video calls (using WhatsApp or Messenger), and receive private text message notifications.
Market and Accessibility
Meta is aiming to make its smart glasses a mainstream accessory. The products are available in a variety of classic styles like the Wayfarer and Headliner and offer options for prescription and Transitions lenses. The tiered pricing strategy creates clear entry and premium options for consumers: the Gen 2 starts at a lower price point, while the advanced Display model bundled with the Neural Band sits at a premium.
Despite the rapid technological advancements, the glasses continue to face scrutiny regarding privacy and social acceptance due to their video and audio recording capabilities in public spaces. Nevertheless, Meta’s aggressive push marks an undeniable step towards establishing wearable AI as the next major personal computing platform.