I tried out Android XR, Google’s latest attempt to take on Meta and Apple

Google Glass. Google Cardboard. Google Daydream.

The company has had its fair shot at VR and XR — there’s no doubt about that. Android XR is Google’s latest attempt at getting back in the game, and this time, the vision is entirely different.

First off, Gemini AI is at the core of the OS, bringing the company’s AI to a wider range of people and use cases. And more than that, as the name suggests, it’s an attempt to bring XR devices into the same Android ecosystem where the company has found so much success with smartphones.

Google invited me to its Mountain View, California, campus to check out Android XR for myself. The hardware on hand consisted only of unreleased prototypes, but regardless of the form factor, the focus was squarely on Android XR as a software platform. To my surprise, I left very excited about what the future holds for the tech in 2025.

Samsung’s Project Moohan VR headset
Google

After putting on the Project Moohan VR headset, built by Samsung, I went through a quick procedure to measure my interpupillary distance (IPD) and dial in the best image clarity within the goggles. Beyond that, no additional calibration was needed to pick it up and go.

I was immediately struck by the quality of the lenses and the fidelity of the passthrough cameras combined with the onscreen elements. My eyes have never seen VR in such pristine, sharp form. I also felt completely safe thanks to the clear view of my surroundings, which lent even more realism to the experience.

At one point, I encountered a bird in the Chrome browser that I could click to “View in 3D.” All of Google’s work with ARCore and AR in Search is supported natively within Android XR. I was completely taken aback by the sharpness and fidelity of the image hitting my eyes, and how it interacted with the depth information of the room.

Though I still don’t know a lot about this headset, it is clearly meant to be a direct competitor to the Apple Vision Pro and aimed at securing a spot among the best VR headsets.

A demonstration of Circle to Search in XR.
Google

Walking through the demo room, I initiated Circle to Search with my fingers in the air, drawing a circle around a doughnut toy that was partially obscured by a hanging plant. The obstacle didn’t faze the process, and I was shown a number of places where I could buy the toy. As a big fan of Circle to Search, I found this extra dimension really impressive.

A recurring highlight was the multimodality of control within Android XR. Hand controls match what I’ve experienced on Meta’s Horizon OS for clicking, stretching, and moving virtual objects. A physical desk equipped with a Bluetooth keyboard and mouse showed how control could swap between input devices on the fly.

Tapping the side of the VR headset would initiate Gemini for voice commands and control. Perhaps the most interesting aspect to me was eye tracking, which required its own calibration process to start. As my eyes landed on objects, they would be highlighted, while a pinch of my relaxed index finger and thumb registered a click. Having never experienced eye-tracking control before, I was very excited by the potential of this feature.
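To make that interaction model concrete, here is a minimal sketch of how gaze-plus-pinch selection can be wired together: the gaze ray picks (and highlights) a target, and a pinch commits the click. Every name here (SceneObject, GazeAndPinchSelector, and the callbacks) is hypothetical and for illustration only, not the actual Android XR API.

```kotlin
// Hypothetical types for illustration; NOT the real Android XR SDK.
data class SceneObject(val name: String, var highlighted: Boolean = false)

class GazeAndPinchSelector {
    private var gazeTarget: SceneObject? = null

    // Called each frame with whatever object the gaze ray intersects (or null).
    fun onGazeUpdate(hit: SceneObject?) {
        if (hit === gazeTarget) return
        gazeTarget?.highlighted = false // un-highlight the previous target
        hit?.highlighted = true         // highlight the new one
        gazeTarget = hit
    }

    // Called when hand tracking detects an index-finger-and-thumb pinch.
    fun onPinch() {
        gazeTarget?.let { println("Clicked: ${it.name}") }
    }
}

fun main() {
    val lamp = SceneObject("lamp")
    val selector = GazeAndPinchSelector()
    selector.onGazeUpdate(lamp) // eyes land on the lamp, so it highlights
    selector.onPinch()          // relaxed pinch prints "Clicked: lamp"
}
```

The nice property of this split, and presumably why it felt so natural in the demo, is that the eyes do the fast, imprecise work of targeting while the hands provide only the deliberate confirmation.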


Another highlight was the effective spatialization of flat footage, which was demonstrated in a few key ways. In Google Photos, pictures and videos from the library were shown with incredible depth generated by AI, and I had a hard time telling that they weren’t spatial to begin with. In a demo of Google Maps, I was taken inside a restaurant for a virtual tour using Street View. The footage was shot years ago without any sort of depth information, but as I stood in the middle of the room, I got a real sense of being there thanks to the newly spatialized presentation.

Moving beyond Google Glass

A woman wearing a pair of smart glasses.
Google

Android XR is designed to adapt its UI and functionality to the hardware’s capabilities. Headsets get the full immersive experience, while smaller devices like glasses get a more focused, streamlined interface. Not only that, but lighter devices can offload processing to a connected smartphone through a split compute configuration: sensor data is sent to the phone for processing, and the rendered pixels are streamed back to the glasses for display.
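As a rough mental model (with entirely hypothetical types and names, not the actual Android XR API), the split compute loop looks something like this:

```kotlin
// A conceptual sketch of split compute; all types here are hypothetical.
data class SensorFrame(val imuSample: FloatArray, val cameraImage: ByteArray)
data class RenderedFrame(val pixels: ByteArray)

// The link between the glasses and the paired phone.
interface PhoneLink {
    fun send(frame: SensorFrame)   // offload raw sensor data to the phone
    fun receive(): RenderedFrame   // get the rendered pixels back
}

class GlassesRuntime(
    private val phone: PhoneLink,
    private val display: (RenderedFrame) -> Unit,
) {
    // One iteration of the loop: capture, offload, then display.
    // The glasses never run the heavy tracking or rendering themselves.
    fun tick(frame: SensorFrame) {
        phone.send(frame)
        display(phone.receive())
    }
}

fun main() {
    val fakePhone = object : PhoneLink {
        override fun send(frame: SensorFrame) { /* processing happens phone-side */ }
        override fun receive() = RenderedFrame(pixels = ByteArray(1920 * 1080))
    }
    val glasses = GlassesRuntime(fakePhone) { println("Displayed ${it.pixels.size}-byte frame") }
    glasses.tick(SensorFrame(imuSample = floatArrayOf(0f, 0f, 9.8f), cameraImage = ByteArray(0)))
}
```

The appeal is the same trade-off smartwatches made early on: keep the wearable light and battery-friendly, and let the phone do the heavy lifting.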

To that end, I was given a pair of black-framed glasses reminiscent of the frames shown off at Google I/O 2024. They fit my face comfortably, and I immediately saw a sharp, full-color image in the center of my right eye’s view, showing an event from the calendar app on a Pixel device in the room. Google Maps streamed written directions to the glasses, while tilting my head downward showed a visual representation of my position on the map. As one of the first recipients of Google Glass when it launched, I really felt like this was the evolution of that vision.

Digital XR elements overlaid on a person’s view as they walk down the sidewalk.
Google

Next, I fired up Project Astra, a multimodal version of Gemini that was also demonstrated at Google I/O this year. I pressed a button on the side of the monocular frames, Gemini said “hey,” and I started asking questions about objects throughout the room. At one point, Gemini told me what drink I could make with a small collection of liquor bottles sitting on the table.

Later in the demonstration, I asked Gemini for the name of the book sitting next to the bottles from earlier, and it gave me all the details. I previewed tracks from an album that I was holding in my hands. Gemini summarized a page out of a book that I pulled off the shelf. A lag in response time was notable throughout the experience, though I was reminded that this is a prototype and a work in progress.

Finally, I was given a brief demonstration of glasses with a binocular display, resulting in a sharp, 3D image in the center of my vision.

Driving the high-density display in the glasses was Raxium’s MicroLED technology, which features monolithically grown red, green, and blue micro-LEDs on the same substrate. This enables high brightness, efficiency, and resolution in a compact size. A projector system sits in the temple of the frames, with a waveguide directing the light to a target area in the lens that reflects it into the eye.

Giving it another shot


All of this is very exciting. But let’s not forget: this is by no means Google’s first attempt at making a successful extended reality platform. Google first brought Google Glass to market a little more than a decade ago, and while it grabbed attention at launch, it ultimately faced public backlash from those concerned about the privacy implications of a camera embedded into the frame of a wearable. Daydream VR was Google’s attempt at smartphone-powered virtual reality, but Google discontinued that hardware in 2019, three years after it launched. I tried my best to temper my expectations with that history in mind.

But I’d be lying if I said Android XR didn’t get me excited. It represents a renewed commitment by the company to create an operating system that can drive headsets, glasses, and other form factors going forward. With Meta already announcing plans to open up its Horizon OS ecosystem, it looks like we’re about to get another explosion of XR headsets next year.

Can Google pull it off? That remains to be seen. But I’m optimistic based on what I’ve seen so far.





