AI Glasses and the Next Platform Question

Almost a year ago we suggested that the ultimate AI interface might be glasses. At the time, much of the excitement was around clip-on and handheld AI devices like the Humane AI Pin and Rabbit R1. Novel ideas, but limited. We took a contrarian position: that true AI companions would need to see and understand the world alongside us, and that glasses were the natural form factor.

Meta’s Connect launch of the Ray-Ban “Display” glasses brings the question back into focus: could AI-enabled glasses be the successor to the smartphone?

Why Glasses Fit AI

Attempts at smart glasses go back more than a decade, and for good reason. Glasses can serve as a vantage point for systems that see and hear the world as we do, and adding a display allows them to bring context into that same field of view. Compare this to the user experience of a phone, which requires us to stop and look down at a screen instead of out at the world.

The problem with earlier glasses was that the hardware wasn’t ready. Displays were clunky, batteries drained quickly, and the devices never looked like something people wanted to wear every day. That’s begun to change. Meta’s partnership with EssilorLuxottica has pushed the design toward something fashionable, while advances in optics and miniaturization have improved size, weight, and usability.

But what really drives the opportunity is AI. Suddenly what glasses capture—what’s seen and heard—can be contextualized in real time. That makes the form factor far more interesting than past attempts. Mark Zuckerberg has called glasses the “ideal form factor for personal superintelligence.” That may be aspirational today, but the logic is sound: if AI is going to act as a companion, it needs to share our sensory inputs, not sit in our pocket.

Image: Mark Zuckerberg highlighting the Meta Ray-Ban Display glasses and neural wristband.

What Meta Launched (and What’s Missing)

The Ray-Ban Display introduces two new elements: a small on-lens display and a neural wristband, built on Meta’s CTRL-labs acquisition, that detects hand gestures by reading the electrical signals of the wrist muscles (surface electromyography). Both are interesting, but the device still feels like a first step. The display appears in only one eye, which makes interactions feel limited and, at times, awkward. And unlike Apple’s Vision Pro, which can embed images into the environment with permanence and context, the Ray-Ban Display is simply an overlay.
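
To make the wristband idea concrete, here is a minimal sketch of how gesture detection from wrist EMG might work in principle: take a short window of the raw signal, compute a simple energy feature, and map it to a gesture. The feature, the thresholds, and the gesture labels are all illustrative assumptions, not Meta’s actual pipeline.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class EmgWindow:
    """One short window of raw sEMG amplitudes from the wrist."""
    samples: list[float]

    def mav(self) -> float:
        # Mean absolute value: a classic, simple EMG energy feature.
        return mean(abs(s) for s in self.samples)

def classify(window: EmgWindow) -> str:
    # Toy thresholds standing in for a trained classifier; real systems
    # learn these mappings from data rather than hand-coding them.
    energy = window.mav()
    if energy < 0.1:
        return "rest"
    if energy < 0.5:
        return "pinch"
    return "fist"

# A small burst of muscle activity reads as a pinch.
print(classify(EmgWindow(samples=[0.2, -0.3, 0.25, -0.2])))  # -> pinch
```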

It’s natural to compare this to the first iPhone. That device was successful immediately, but what truly transformed it into a platform was the App Store, launched a year later. Meta hasn’t yet shown the equivalent path for glasses. But if it does, it could move from being just another participant in the mobile ecosystem to controlling the next one.

For those who want to see the demo, here’s a good review video.

The Practical Barriers

For glasses to become mainstream, several hurdles remain:

  • Battery life and heat: all-day use requires efficient hardware that won’t overheat in a lightweight frame.
  • Size and weight: the generally accepted limit for comfortable wear is about 80 grams. Anything heavier won’t make it into daily life.
  • Interface: wristbands, gaze tracking, or voice control must feel natural and reliable.
  • Style: glasses have to look good enough to wear every day.

The Industry Signals

Despite these challenges, major companies are investing heavily. In July, CNBC reported that Meta had taken a $3.5 billion stake in EssilorLuxottica, Ray-Ban’s parent company, as part of its long-term bet on eyewear. Just two months earlier, Google committed $150 million to a partnership with Warby Parker to develop AI glasses. These moves show that the largest players see glasses as a credible candidate for the next computing platform.

The Privacy Question

But will adoption be limited by technical specs, or by privacy? Glasses that can see and hear alongside you are powerful, but they also carry obvious risks. For the wearer, the device becomes a continuous source of data about daily life. That raises questions of ownership: who holds the recordings, how securely are they stored, and what can they be used for? If the device becomes an AI companion, it will know not just where you’ve been, but what you’ve said, who you were with, and what you were looking at.

The impact doesn’t stop with the wearer. Everyone nearby becomes part of the captured environment. Conversations, movements, and even facial expressions could be interpreted by systems running quietly in the background. Social comfort with that kind of observation is far from guaranteed.

For AI glasses to succeed, privacy will need to be designed in from the start—clear signals about when the device is active, strict boundaries on data handling, and transparent choices for users and bystanders alike. Without those safeguards, the technology risks solving one problem while creating another.
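
As a thought experiment, those safeguards could be expressed as an explicit, inspectable policy rather than as buried defaults. The sketch below is purely illustrative; the fields and their names are assumptions, and none of this corresponds to a real Meta API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CapturePolicy:
    # Hypothetical privacy-by-design defaults for AI glasses.
    indicator_light_on: bool = True      # visible signal whenever sensors are live
    on_device_processing: bool = True    # raw audio/video stays on the frame by default
    retention_days: int = 0              # 0 = nothing stored without explicit opt-in
    blur_bystander_faces: bool = True    # strangers anonymized before anything persists

    def allows_cloud_upload(self) -> bool:
        # Upload requires an explicit opt-in to retention and off-device processing.
        return self.retention_days > 0 and not self.on_device_processing

policy = CapturePolicy()
assert not policy.allows_cloud_upload()  # the safe default
```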

What Would Make Glasses a Platform?

For glasses to move beyond niche hardware, they need more than sensors and displays. They need an ecosystem. That means developer tools for AI-native applications, and integration with communication, productivity, and commerce. The smartphone era began with devices that were limited until software ecosystems unlocked their potential. Glasses could follow the same path.

Another step would be embedding digital content into the physical environment. Today, the Ray-Ban Display shows information as an overlay in one eye. Useful, but not transformative. Imagine instead being able to “pin” a calendar to your kitchen wall, see instructions aligned with the equipment in front of you, or collaborate on shared content anchored to a place. Permanency and context add enormous value. This is the direction Apple hinted at with the Vision Pro, and it may be where AI-enabled glasses ultimately converge.
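
To make “permanency and context” concrete in software terms, here is a rough sketch of a spatial anchor: content bound to a pose in a mapped space so that it reappears in the same place across sessions and can be shared. The data model and names are assumptions for illustration, not any vendor’s actual SDK.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Pose:
    # Position in meters and orientation as a quaternion, in a mapped room.
    xyz: tuple[float, float, float]
    quat: tuple[float, float, float, float] = (0.0, 0.0, 0.0, 1.0)

@dataclass
class SpatialAnchor:
    # Content pinned to a place rather than to the screen: the difference
    # between a heads-up overlay and content embedded in the environment.
    pose: Pose
    content: str                # e.g. a calendar, a checklist, shared notes
    space_id: str               # identifier for the mapped room or location
    anchor_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    shared_with: list[str] = field(default_factory=list)  # others who also see it

# "Pin" a calendar to the kitchen wall; it reappears there across sessions.
kitchen_calendar = SpatialAnchor(
    pose=Pose(xyz=(1.2, 1.5, 0.3)),
    content="family-calendar",
    space_id="home/kitchen",
)
```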

The Road Ahead

So, could AI-enabled glasses be the next platform after the smartphone? Possibly. The Ray-Ban Display isn’t there yet, but it marks an important step.

If progress continues, five years from now glasses could be lighter, socially acceptable, and powered by AI agents that feel essential. Whether that future belongs to Meta, Apple, Google, Amazon, or a company we haven’t heard of yet is still uncertain. But the race is underway.

The Ray-Ban Display won’t replace anyone’s phone today. It isn’t “true AR,” and its interface will feel experimental for most users. But it points in the right direction: away from screens we hold, and toward computing that sees and understands the world with us. For now, it’s a product for enthusiasts. In time, it could be remembered as the beginning of something much larger.