AR Archives - Technowize
https://www.technowize.com/technology/ar/
Wise Word on Technology and Innovations

Designed for Comfort and Compatibility—Xreal One Launch Details
https://www.technowize.com/designed-for-comfort-and-compatibility-xreal-one-launch-details/
Thu, 05 Dec 2024

Xreal One cinematic glasses are a perfect companion for accessing a big screen on the go without having to lug a TV around; however, the AR capabilities could be better.
The Vision Pro is hard at work trying to revive its reputation with key software updates, but the Xreal One launch details tell us there’s another company working just as hard. The Xreal One AR glasses are the company’s latest redesign of an already impressive Xreal Air 2, bringing us another example of what wearable smart technology can do. 

With the newly designed X1 chip installed and Xreal One’s enhanced connectivity, the latest series of small glasses is all set to capture the AR market with its visual capabilities. Reviews show that the quality is perhaps still lacking and that the series has a long way to go before it can lead the industry; however, these 84g devices are portable and comfortable like nothing else on the market.

Image: Xreal One Series

Xreal One Launch Details—Glasses Never Have to Be Boring Again

The Xreal One cinematic glasses are the company’s latest offering in the AR segment, and while they don’t boast too many unique AR-centric features, they are an amazing alternative to watching a movie on your tiny phone screen. The Xreal One’s figurative bag of new features holds multiple TÜV Rheinland eye health certifications and promises an enhanced audio experience through the Bose sound program built into the device.

There are two glasses available in the newly launched series, the Xreal One and Xreal One Pro, and they represent the “most advanced consumer AR glasses on the market today,” according to Chi Xu, CEO and Co-founder of Xreal.

It All Begins with a Single Chip, the X1

The Xreal One AR glasses are built on the “in-house designed X1 independent spatial computing co-processor,” which essentially handles all the computing needs of the glasses within the device itself, instead of relying on the smartphone to support the majority of the processing. The custom chip is said to be the result of over three years of R&D, which has resulted in the “world’s first customized silicon designed specifically for OST AR glasses.”

The Xreal One launch announcement details how the glasses can be connected to any device with video output over USB-C. With the press of a button, you can view the connected device’s content laid out on a stable, expansive virtual screen.

Thanks to the X1 chip, users get a low-latency 3DoF (three degrees of freedom) spatial screen that guarantees a clean output every time. The X1 also supports a motion-to-photon (M2P) latency of only 3ms at 120Hz, which is far more impressive than the 20ms on the Air 2. The low latency guarantees uninterrupted experiences, especially with the AR holographic visuals from developers who build with XREAL’s SDK.

Image: Xreal One series

Xreal One Display Features—Seeing is Believing

The Xreal One is not overambitious and doesn’t claim too many unrealistic capabilities. It does one thing and it does it well, which is to project your smart device’s content into virtual space on a much bigger screen. The Xreal One cinematic glasses offer a 1080p Full HD viewing experience for each eye and reach an impressive 50-degree field-of-view (FOV). The Xreal One Pro takes it one step further with a flat-prism lens to manage a 57-degree FOV.

For the non-Pro and Pro models, these glasses have a 120Hz refresh rate and peak brightness of 600 and 700 nits, respectively. They also have electrochromic dimming capabilities to darken or lighten the lens at will. 

If you want to listen to audio via the headset, the Sound by Bose program is available to enhance the experience to optimal levels. 

Adjusting for Comfort with the Xreal One AR glasses

The normal range for interpupillary distance (IPD) is 50-57mm in adults, and the Xreal One Pro comes in two sizes for the different ranges, allowing customers to select the horizontal IPD size that’s best for their vision. The Xreal One offers just one size but supports software-based IPD adjustments. These glasses also have three-level temple adjustments for the right vertical fit and can be combined with prescription inserts for those who need additional support.

“Our world-leading low latency produces a super stable spatial display that’s surprising everyone who sees it. Plus, with the new TUV Rheinland Eye Comfort (5-Star) certification, you can expect a high level of eye comfort and safety from your new XREAL glasses. With these results, we’re now at the point where AR glasses’ spatial screens can truly replace physical monitors all day long.”

The Xreal One glasses are lightweight and customizable, with an interchangeable front frame that can let you switch up the look. The AR wearables are also better designed for comfort in terms of weight distribution, so whether you use them for hours of work or hours of gaming, you should still have a good time. You also have the ability to swivel your head around while the screen remains anchored and stable, making the device even more impressive.

The Xreal glasses flaunt some new buttons that allow you to control the settings from the frame rather than having to turn to the connected device, which is a welcome change.

Image: Xreal One series

We Cannot Forget About the Xreal Eye Camera 

The Xreal One series’ new features also include a detachable camera accessory for those who want to be able to record with the glasses. The 12MP modular camera sits below the nose bridge and captures both photos and videos from the wearer’s perspective, with HD recording at up to 1080p. While the AI functionalities are limited for now, Xreal Eye promises multimodal AI capabilities with a future update.

Combining the glasses and the camera with the Xreal Beam Pro mobile device opens up the door for some interesting combinations of the real world with augmented reality holographic displays. 

Xreal One Series Launch Details and Pricing

The Xreal One series of AR glasses is currently available for pre-order. The Xreal One is priced at $499 (£449/€549) and will begin shipping to customers in mid-December. The more advanced Xreal One Pro is priced at $599 (£549/€649) and will begin shipping in early 2025.

The Xreal One will be available in the US, UK, France, Germany, Italy, the Czech Republic, the Netherlands, China, Japan, and South Korea, so if you live outside these regions, you might have to wait a little longer for Xreal to consider a launch in your area.

If you’ve been meaning to toy with AR technology, the Xreal One series is a good starting point, but at the end of the day, its main function is to provide a stable or free-moving display, depending on what you need. The series has some audio playback and recording capabilities that make it more versatile, but it is still a few years away from realizing the full potential of AR.

The Snap AR Spectacles Invite Developers to Innovate
https://www.technowize.com/the-snap-ar-spectacles-invite-developers-to-innovate/
Wed, 18 Sep 2024

The Snap Spectacles subscription will cost developers $99 per month, and developers must commit to a minimum one-year plan in order to build for the Snap Spectacles.
Snap just revealed its fifth-generation AR Spectacles, and despite its bulky size, the device makes a solid case for AR technology. The standalone AR glasses were unveiled at Snap’s recent annual Partner Summit in Los Angeles, which showcased how committed the company is to championing the cause of augmented reality. These fifth-gen Spectacles are designed to work well both indoors and outdoors, and they promise a well-integrated shift between the real world and the visuals generated by the device.

If you’re excited to give the glasses a whirl, however, you should know that they are set for a limited release through the Spectacles Developer Program. The glasses will be exclusively available to developers who can expand the range of services the AR glasses provide to future customers.

Image: Snap

The Snap AR Spectacles Are a Healthy Step Forward for Augmented Reality

The evolution of Snap’s Spectacles has been quite astonishing, and while other brands have slowed down their work on similar devices, Snap remains undeterred. The fifth-gen Spectacles are powered by SnapOS and work as standalone devices that you can wear on the go. At a glance, the gadget doesn’t look quite like regular glasses the way the Ray-Ban Meta glasses do, but we prefer it this way. It is better when smart glasses let users around them know they are more than just a pair of prescription lenses. Discreet smart technology is quite uncomfortable to be around. 

Weighing 226 grams, the lightweight glasses are equipped with four separate cameras that can integrate the details of the environment seamlessly into the Snap Spatial Engine. This means that the device supports hand tracking just like the Apple Vision Pro, opening up a world of possibilities when it comes to designing apps and functionalities for the wearable device.

The Snap AR Spectacles have a see-through display that allows the user to perceive the world around them naturally rather than have the lens translate the details through a fuzzy camera. This guarantees higher-quality visuals when the optical engine is paired with a 46-degree diagonal field of view at a 37-pixel-per-degree resolution, allowing the user to move about comfortably with the glasses on. The Verge’s hands-on review criticizes the FOV for being quite restrictive, which is a fair complaint to have about Snap’s latest offering. The Spectacles also develop a tint while outdoors to ensure that the generated visuals are just as vibrant, regardless of the available light.

The Snap Spectacles’ next stage of evolution involves Liquid Crystal on Silicon (LCoS) micro-projectors for the images that the device generates. The explanation on the Snap blog states, “Our waveguides make it possible to see the images created by the LCoS projector, without the need for lengthy calibrations or custom fittings. Each advanced waveguide has billions of nanostructures that move light into your field of view to combine Snap OS with the real world.” 

The Standalone AR Glasses Are Powered by Snap OS

Along with the Snap AR Spectacles, the company also introduced us to the OS system that has been designed from the ground up. The OS relies on a dual system-on-a-chip architecture that incorporates two processors from Qualcomm for all its fast-paced computing needs. The device also has titanium vapor chambers in each temple to support heat dissipation, preventing any threat of overheating while responding to your commands.

The Spectacles guarantee up to 45 minutes of continuous standalone runtime, but the best way to use the device for longer sessions is to keep it plugged in. Snap’s AR Spectacles are capable of interpreting voice commands when you want to navigate through the device’s menu without flailing around. The two infrared sensors on the device also do a decent job of tracking hand movement, so if you prefer hand gestures to interact with the device, you have the feature at your disposal.


Snap Spectacles’ Subscription Plan for Developers

The Snap AR Spectacles have been developed for developers who can ultimately make the device a functional and marketable piece of equipment. The fifth-gen Spectacles are priced at $99 per month, and Snap expects developers to rent the glasses for at least a year in order to work with their capabilities. The CEO appears certain that there is a market for these glasses and a collective of keen minds willing to utilize the software available to them to develop more advanced tools than what the standalone AR glasses offer on their own.

“We want to be the most developer-friendly platform in the world and empower developers to invest in building amazing Lenses,” the company states in its blog. Lens Studio 5.0 modernizes and simplifies the process of working with TypeScript, JavaScript, and custom ML models directly in Lenses for all their processing needs. Through a partnership with OpenAI, developers will also be able to utilize cloud-hosted multimodal AI models to add to their capabilities.
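For a rough sense of what that kind of Lens scripting might look like, here is a minimal, hypothetical TypeScript sketch of a script that feeds camera frames to a custom ML model. The interfaces and method names below are illustrative assumptions for this article, not the actual Lens Studio 5.0 API.

```typescript
// Hypothetical sketch only: the types and method names are assumptions,
// not the real Lens Studio API. It illustrates the general shape of a
// lens that runs a developer-supplied ML model on each camera frame.

interface Frame {
  pixels: Uint8Array; // raw camera pixel data
  width: number;
  height: number;
}

interface CustomMLModel {
  // A developer-supplied model, e.g. a style-transfer or segmentation net.
  run(input: Frame): Float32Array;
}

class MLDrivenLens {
  constructor(private model: CustomMLModel) {}

  // Called once per camera frame by the (assumed) lens runtime; the model
  // output would then drive whatever visual effect the lens applies.
  onFrame(frame: Frame): Float32Array {
    return this.model.run(frame);
  }
}
```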

Companies like the LEGO Group and Niantic have already invested in Lens Studio and SnapOS, but the more the merrier. If developers take advantage of the Snap AR Spectacles and devise native applications and integrations that people can put to use, it is likely to enhance the appeal of the device when it’s ready for public release.

Among the many issues users had with the Apple Vision Pro, one that should have been addressed was the limited number of apps and services compatible with the headset. Microsoft’s HoloLens also held great potential, but it is no longer commercially available. Snap’s standalone AR glasses have the potential to dominate a still untapped market, but Snap will have to convince more people of their benefits for the transition to truly take shape.

ChatGPT’s Voice Imitation Capabilities Have Been Promptly Addressed
https://www.technowize.com/chatgpts-voice-imitation-capabilities-have-been-promptly-addressed/
Tue, 13 Aug 2024

OpenAI detected an AI voice cloning issue with GPT-4o where the tool unintentionally imitated the voice of the user. The company has put checks in place to ensure the AI only uses approved voices for its audio outputs.
The recent discovery of ChatGPT’s voice imitation tendencies during testing has given us a glimpse of just how unsettling AI can be. OpenAI published a GPT-4o System Card, which is essentially a scorecard on key risk areas and what measures are being taken to mitigate the risk. One of the findings from the assessment was an AI voice cloning issue in the Advanced Voice Mode, where the AI was able to imitate users’ voices without permission, entirely unprompted. 

Despite how terrifying that sounds, ChatGPT’s cloned voice incident is not expected to be a major threat. The company has put in safeguards against it already, but it does bring to light the many dangers of AI.

Image: OpenAI

ChatGPT’s Voice Imitation Habits Are Equal Parts Interesting and Concerning

Artificial intelligence presents us with new technological possibilities every day, but these don’t come without a fair number of risks. GPT-4o is all set to be the company’s most humanized AI model with advanced multimodal capabilities that will allow users to have a more natural experience while interacting with the AI. Along with being cheaper and faster, the company also promises that it can “match GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages.” Despite the many advancements, unintentional consequences do come up and need to be monitored.

OpenAI detected the AI voice cloning issue while conducting its own investigation to identify key risk areas and mitigate them sufficiently. The key areas of risk evaluation and mitigation included ChatGPT’s unauthorized voice imitation ability, speaker identification concerns, ungrounded inference and sensitive trait attribution, and generating disallowed audio content or erotic and violent speech. The company has put checks in place to address all these issues. 

More on the AI Voice Cloning Issue

The GPT-4o voice cloning risk was explained in brief, with the OpenAI team explaining the voice generation capability, its dangers, and what was done to minimize the dangers. The AI has voice-generating capabilities that can create audio with a “human-sounding synthetic voice,” so it is possible for the tool to use an input clip to create something new. 

This is just as dangerous as it sounds, as anyone could use the recording of a person’s voice to create an entirely new audio file without permission. In this era of misinformation, the audio could spread like wildfire with no one aware of its origins. Many platforms like YouTube, Google,  and Instagram are doing what they can to combat the spread of inaccurate information and tag AI-generated content when possible, but we have a long way to go before we perfect these systems.

OpenAI found some rare instances of the AI doing this unprompted, generating an output emulating the user’s voice. To combat ChatGPT’s voice imitation tendencies, the company has added restrictions that force the AI to use the preset voices collected from voice actors for all of its output generation. OpenAI also built a standalone output classifier to detect whether GPT-4o has begun voice cloning and strayed to a voice different from the pre-approved list. If the voice doesn’t match the list, the output is blocked.

“Our system currently catches 100% of meaningful deviations from the system voice based on our internal evaluations, which include samples generated by other system voices, clips during which the model used a voice from the prompt as part of its completion, and an assortment of human samples.”

—OpenAI
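To illustrate how that kind of gate can work in principle, here is a minimal sketch. It is not OpenAI’s implementation; the embedding function, threshold, and names are assumptions made for this example. The idea is simply to compare a speaker embedding of the generated audio against embeddings of the approved preset voices and block anything that drifts away from all of them.

```typescript
// Illustrative sketch, not OpenAI's classifier. Assumes some external
// model (embedAudio, not shown) turns an audio clip into a speaker
// embedding vector; the 0.85 threshold is a made-up value.

type Embedding = number[];

// Cosine similarity between two speaker embeddings.
function cosineSimilarity(a: Embedding, b: Embedding): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Allow the output only if it sounds like one of the approved preset voices.
function isApprovedVoice(
  output: Embedding,
  approvedPresets: Embedding[],
  threshold = 0.85
): boolean {
  return approvedPresets.some(
    (preset) => cosineSimilarity(output, preset) >= threshold
  );
}

// Usage (embedAudio, generatedClip, presetEmbeddings, and blockOutput are hypothetical):
// const ok = isApprovedVoice(embedAudio(generatedClip), presetEmbeddings);
// if (!ok) blockOutput();
```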

The measures put into place appear to be satisfactory and OpenAI claims the residual risk of unauthorized voice generation is minimal, even though the AI voice cloning issue persists as a flaw in the model. 

The ChatGPT Cloned Voice Incident Opens the Conversation on the Need for Safeguards against the Misuse of AI

GPT-4o and its voice cloning abilities came to light from OpenAI’s own report so it is good to see the company take ownership of the AI flaw and put checks in place to safeguard against it. Companies that are working on AI models need to be exceptionally thorough in checking their tools to detect potential vulnerabilities and security issues that might emerge following its release. If the GPT-4o voice cloning had remained undetected at this stage, its rollout might have led to hundreds of cases of misuse before they could put checks in place.

If ChatGPT is capable of voice imitation, it’s safe to say that other models will also get there soon enough and open the door for such misuse. While a number of companies rely on OpenAI to power their services, there is a growing catalog of models out there that could evolve with the same capabilities. Now that we know the AI has imitated its user’s voice, the general public might begin testing the limits of the safeguards to see if there is a way to work around it.

It’s also of note that the ChatGPT voice imitation issue isn’t the only problem that came to light. The AI was caught making inferences about the users without any grounds for it in the audio content, or in some cases, using audio cues to determine details like nationality. Both outcomes are tendencies we need to be wary of and are issues that the company has addressed. 

The generation of inappropriate audio content was also found to be easier than the generation of inappropriate text. As a result, OpenAI has added a text transcription filter to ensure audio generation is blocked if it is found to be inappropriate. It’s hard to predict every circuitous route that the global population might take to get the AI to do what it wants, but OpenAI’s countermeasures are a good start.

Are You Willing to Give Snapchat’s AI Features a Chance?
https://www.technowize.com/are-you-willing-to-give-snapchats-ai-features-a-chance/
Mon, 24 Jun 2024

These aren’t the first Snapchat AI tools to debut, but they are likely to be the best received. Snapchat is pursuing its AR and AI ambitions in full swing with a new set of updates to its AI capabilities.
If you were thinking of deleting your Snapchat now that you no longer use the embarrassing filters you grew up with, the new Snapchat AI features might give you a reason to hold off on the decision a little longer. In a recent blog post, the tech company gave us a quick glance into the tools it is planning to introduce to the app and there are two main components to the announcement—Augmented Reality (AR) and on-device Artificial Intelligence (AI). With the app’s filters being one of its trademark experiences, it’s no surprise that we’re witnessing Snapchat’s AI-powered tools return to optimize what we can do with our own faces.

Snapchat’s AI advancements haven’t necessarily been monumental in their approach so far, but we understand that there are only so many ways a messaging app can incorporate such technology. Without a doubt, we will always circle back to Snapchat’s next advancement with its filters more than its other features.

Image: Snapchat

Coming Soon: New Snapchat AI Features That Give Filters A Boost

With a commitment to encouraging its global community to express themselves and explore the bounds of their creativity, Snapchat has been working hard on AI tools to support its ambitions. Most recently, Snapchat’s AI advancements have taken the shape of a real-time imaging model that aims to use AR technology to breathe new life and innovation into the app experience. A video of the AI shows how Snapchatters can take a simple idea, like the prompt of a “50’s sci-fi film,” to record a video that would fit right into a Star Trek rip-off.

The company doesn’t give us any other examples of what the tool is capable of so the introduction is too brief to give us a real sense of how well it works. Still, Snapchat’s AI features are a significant milestone because they involve considerable work to optimize on-device real-time generative AI that can work faster than ever before without weighing your mobile down. This means a quick response to your demands as you rush to send a Snap back and keep your streak alive.

Image: Simplifying the creation of 3-dimensional assets.

Maxing Out Snapchat’s AR Effects for the Creator Community 

If you’re part of the creator community that enjoys using Lens Studio to design your own Snapchat AR effects, then the upcoming Lens Studio 5.0 release should be an interesting update. The GenAI Suite of Snapchat AI tools will allow AR creators to design custom ML models for their lenses and speed up the process of turning out the final results. Regardless of how you personally feel about Snapchat, you have to admit that the company has been a pioneer in simplifying the power of Augmented Reality for everyday users.

“Our AR products and services are driving major impact at scale today. On average, over 300 million people engage with augmented reality every single day on Snapchat. Our community plays with AR lenses billions of times per day on average, and our AR creator community has built millions of lenses using our Lens Studio software.”

—Evan Spiegel, Chief Executive Officer and Co-Founder of Snapchat, during the Q1 2024 Earnings

Bobby Murphy, Snap’s chief technology officer, told Reuters, “What’s fun for us is that these tools both stretch the creative space in which people can work, but they’re also easy to use, so newcomers can build something unique very quickly.” Using Snapchat’s AI tools like the PBR material generation function, users will be able to speed up the process of generating meshes and 3D assets with a simple prompt, without having to spend hours structuring them themselves. The Snapchat AI features included in Lens Studio will be supported by an AI assistant that can help individuals explore the suite of services more easily to best reflect the vision they had in mind.

To further promote Snapchat’s AI-powered tools, the company has reportedly partnered with London’s National Portrait Gallery to create lenses that reflect some popular portrait styles. Using these filters, users can submit their snaps and add them to the “Living Portrait” projection wall in the museum. It’s quite a unique way to get Snapchatters to try out their filters. 

Image: Snapchat

Are Snapchat AI Tools Safe to Use?

If you’re among the section of the population that looks at someone in horror to ask, “You still use Snapchat??” then the new Snapchat AI features are not targeted at you, but there is a significant section of the population that still does use the app. Either from nostalgia or to goof around with the comical filters, there are a lot of people who turn to the app and unfortunately, this includes younger kids. 

Snapchat states, “We implement safeguards designed to help keep generative AI content in line with our Community Guidelines, and we expect Snapchatters to use AI responsibly.” The company’s terms of service are very clear about their stance on the misappropriation of generative AI, but how that extends to the regulations on the creative suite remains to be seen. Every time we add AI to a platform, the question of just how much freedom is available to creators comes up due to fear of the misappropriation of these tools and services. The exact restrictions and guidelines on the creative liberties that creators can take are something we’ll see unfold over time.

For now, the company does have some privacy and security measures in place, so the Snapchat AI features should be safe to use as is; however, the company can always be more explicit about its guidelines around the new applications of AI.

Snapchat AI Tool “My AI” Has Not Been Well Received

Snapchat’s own AI chatbot, powered by ChatGPT, landed in hot water on release last year when the UK’s Information Commissioner’s Office (ICO) launched a “preliminary investigation” into the tool. The investigation was concluded with the ICO satisfied by the company’s compliance with data protection laws, but the doubt hasn’t fully left the minds of users. When the AI chatbot was released, the app’s average U.S. App Store review score fell to 1.67, according to TechCrunch. To get rid of the chatbot, you had to have a Snapchat Plus subscription.

With one-star ratings making up 75 percent of reviews, users were very vocal about their dislike for the AI and Snapchat’s decision to make it impossible for users to turn the chatbot off. Now, was this due to privacy and security concerns? That remains unclear. Many have criticized the AI for serving no purpose and being too regulated, making it more robotic and lifeless than you’d expect from an AI pretending to be a “friend.” What we do know is that Snapchat’s AI features that stray away from improving filters have not been welcome in the past.

Snapchat’s AI advancements are commendable and from the looks of it, they have quite a following of users who still experiment with filters and their Lens Studio. The advancement of such tech may feel frivolous for a messaging app, but improving 3D rendering technology could lead us to tools that have many uses outside of the app. 

Testing The Boundaries of Extended Reality With Gravity Jack and GigXR
https://www.technowize.com/testing-the-boundaries-of-extended-reality-with-gravity-jack-and-gigxr/
Sat, 02 Mar 2024

Augmented reality and mixed reality experiences are becoming more commonplace as talk of the metaverse increases in frequency. There are many unique uses for such tech that we might never have considered before.
With every passing year, we draw one step closer to a future where extended reality becomes a core part of our everyday experience. Sci-fi books and movies have done their best to imagine the myriad ways this will take place, but the real game changers are the various companies that are redefining our future with their own innovative offerings. From businesses that are creating the hardware for virtual reality experiences to those that are establishing the computing foundations for the generation of such tech, we are surrounded by industry leaders who are paving the way for a tech-integrated future.

The progress towards the extended reality technology we have today has not been an instantaneous one. Cinematographer Morton Heilig is often credited as having created the first virtual reality machine, the Sensorama, that accommodated four people and allowed them to watch a 3D video with different forms of stimulation to recreate the effects of what was being watched, such as vibrations and a wind effect, for a more well-rounded experience. Over time, 3D technology became more commonplace and we began to see it everywhere, from movie theaters to shows at amusement parks.

Many rides at these parks became more and more immersive with time, evolving from gimmicky ten-minute experiences to now allowing you to participate in the content you were watching, providing you with a 360-degree view as you moved around and shot things that appeared in front of you while you had your VR headsets on. Somewhere along the line, the overarching category of extended reality was born. 


Delving into the Field of Extended Reality

It can get a little confusing to really understand what all these terms are and what the hype is really about. A good place to build up from is starting with our understanding of virtual reality. VR technology takes digital content and allows you to have an up close and personal experience with it. A pair of simple glasses that lets any light source you see turn into bokeh hearts is a silly but simple example of how VR headsets are able to add a lens that lets you see something different in front of you. Whether it’s the landscape of your favorite game or an exotic destination you’ve never been to before, VR technology can allow you to witness it on a much more immersive scale compared to just watching it on tv. Depending on the consoles you pair with the headset, you might even be able to physically interact with what you see, but not all VR headsets are built equally. 

The Google Cardboard headset was one such device, a novel idea that briefly took hold of the company at a time when everyone was crediting the Oculus Rift for reviving the VR industry. Google released a cardboard—yes cardboard—headset way back in 2014, which used a very simple cardboard structure, some magnets, lenses, and your smartphone to experience VR without any of the more complicated bells and whistles of VR technology. The company even provided free access to the device’s plans for those who wanted to build it themselves instead of purchasing the simplistic device, marking a truly magnanimous initiative from the company amidst capitalistic endeavors. Using the gyroscope feature on smartphones to support the head-tracking feature for the VR experience, the headset was a very significant moment in history when other devices were trying to break onto the market.

Then came the conversations around augmented reality (AR), which relied less on immersing you in the experience and instead tried to bring the virtual experience into your world. You might have witnessed apps that are able to project images into your physical space, mapping the region to understand the layout and then presenting an object that adapts to your environment. Snapchat had its AR moment when it allowed you to see your own virtual characters or “Bitmojis” make themselves comfortable in the space next to you. Snapchat’s AR Lens Studio allowed users to create their own lenses and filters to circulate on the platform, highlighting the versatility of what such tech can do. Ikea’s shopping app, which allowed you to view what specific furniture would look like in your space, was another example of AR technology. 

Then came the world of mixed reality (MR), which now stands as a combination of these innovative mechanisms, allowing you to interact with your apps, view movies, play games, attend meetings, manipulate digital objects, and essentially make the virtual experience a fully integrated part of your real one. We already have a few major launches for mixed reality headsets this year, whether you’re considering purchasing the Apple Vision Pro headset for daily use or the Sony XR headset designed for engineers and product designers. Now these headsets and the other upcoming launches give us a lot to look forward to in 2024 but they’re not the only players doing their part to revolutionize the field of extended reality (XR), a blanket term that covers all that is being done with such technology, including VR, AR, and MR. 


How Did We Get Here and Where Are We Going With AR? Take a Stroll With Gravity Jack

In the course of our conversation around AR, we were fortunate enough to talk to Luke Richey, Co-Founder and Chief Visionary Officer at Gravity Jack, the company that patented augmented reality technology. When asked about the integration and relevance of AR technology, there was a lot to learn from his insight. “While AR has been making its way into the spotlight for a while now, the last few years have seen a strong shift in both consumer preference for AR experiences and corporate adoption of the tech. Quite a few retailers have started integrating AR, one example being L’Oréal’s ModiFace experience, using AR tech licensed from Gravity Jack to analyze a user’s skin and suggest a customized beauty routine.”

“Beyond retail, Gravity Jack’s proprietary AR methods are licensed by several tech companies, including Samsung. Samsung utilizes our patented augmented reality tech for AR Zone, a camera enhancement with AR Emoji and Sticker capabilities, DreamGround, a live AR experience, and augmented reality virtual shopping experiences. In that same vein, T-Mobile’s Accelerator program relies on AR tech licensed from Gravity Jack to build 5G AR experiences for smart glasses, while 8thWall licenses its innovative, protected methodologies to develop its countless WebAR experiences and games.”

If these examples are any indication, companies, regardless of the industry, are finding ways to include AR services as a part of their customer experience. There is evidently significant unexplored territory, considering the fact that we are only witnessing the nascent beginnings of AR from what we can tell. 

“When we started in 2009 most people didn’t even believe that AR was possible. We were laughed out of more than one investor meeting and quite a few people called us crazy. Most of our sales process consisted of explaining what augmented reality even was before we could explain how it could be used to benefit business. Obviously, we’ve made leaps and bounds since then. The tech has become more streamlined and relevant to widely used tech features, and the introduction of LiDar sensors to the smartphone made AR accessible to the average user.”

—Luke Richey, Co-Founder and Chief Visionary Officer at Gravity Jack

In addition to the usability of AR, not only does Luke Richey believe in the future potential of AR technology, but he has great faith in its potential to create jobs as well. He firmly holds that AR has the ability to revolutionize business operations and encourage the emergence of a new set of professionals, soon to be led by experts on augmented reality, working in cross-functional teams to determine how AR can enhance the entire chain of processes that are involved in running a business. 

“One significant area where AR will drive job creation is in healthcare. Now, not just medical professionals, but anyone equipped with AR-enabled devices will have access to real-time patient data, augmented anatomy visualization, and advanced diagnostic tools. Imagine a world where parents can skip trips to the ER and sew stitches into their children’s cuts with the same precision as a doctor or nurse. AR also allows the ability for a teleconference where a doctor can oversee the stitching process if needed. Consequently, specialized roles such as AR-assisted surgeons, who utilize immersive technology for precision surgeries, and healthcare data analysts adept at interpreting AR-generated information, will become integral parts of the healthcare ecosystem.

“In the realm of manufacturing and logistics, AR-powered solutions will revolutionize processes, leading to the emergence of roles like AR-driven supply chain managers and maintenance technicians skilled in using AR for equipment diagnostics and repairs. These professionals will leverage AR interfaces to streamline operations, optimize logistics, and improve efficiency across various industrial sectors.”

Taking a Break to Tap Into the Potential of AI

Considering the emphasis placed on AR tech as a critical component of the future, the next question might be to consider how it ties into the other trend of 2024—artificial intelligence. “The integration of artificial intelligence (AI) as an innovative developer tool will undoubtedly change the game in the creation of AR experiences which will eventually develop themselves. AI-driven algorithms will be capable of generating code, optimizing user interfaces, and even dynamically adapting AR experiences based on user behavior and preferences,” says Co-Founder and Chief Visionary Officer at Gravity Jack, Luke Richey.

“However, despite AI’s untapped potential in expediting development, the need for supportive roles will remain crucial. AR experience experts and accessibility specialists will play vital roles in humanizing AI-generated experiences. These professionals will offer personalized guidance, ensuring that AI-generated AR content meets specific user needs, addresses nuanced challenges, and maintains a high standard of usability. As AI evolves, these supportive roles will adapt, overseeing AI-driven development, ensuring ethical considerations, and leveraging human insights to fine-tune AI-generated AR experiences, making sure that everything continues to be optimized towards a human-centric design. All of that to say, 2030 can’t get here soon enough.” 

And we’d have to agree that there does seem to be quite a fascinating future in store for us as soon as we get more comfortable working with augmented reality. With all of these masterful insights into AR, you might wonder how Gravity Jack’s own projects figure into the proliferation of AR technology. The company’s upcoming AR-AI collaboration project, an immersive apocalyptic game, WarTribe of Binyamin, should have some answers for you.

“WarTribe employs a geo-targeted questing mechanism that sends certain missions to players based on their location. On top of regular gameplay which includes real-world quests, players in areas with an unreached language will be specifically charged with translating their native languages into a trade language, while other players in the same area will be charged with verifying the translations of their counterparts. Their efforts within the game funnel data to an AI engine, training a Natural Language Processor on the world’s ‘last languages’ that currently remain untranslated by other platforms like ChatGPT and Google Translate.

“By crowd-sourcing data sets to train the AI, we circumvent a lot of the issues and expenses faced by the aforementioned platforms. Payments issued to play-to-earn players are funded with the in-game purchases (perks, weapons, etc.) of players in more affluent geo areas, creating a circular sort of ecosystem within the WarTribe economy.”

“The inspiration for this endeavor comes from an internal philosophy we’ve termed Opportunomics, and C.K. Prahalad’s theories surrounding the ‘Bottom of the Wealth Pyramid.’ We want to foster new levels of global connection that include formerly marginalized groups by facilitating worldwide communication in heart languages and opportunities for financial freedom.” The future of AR appears to be in safe hands, considering the innovation being put into place to solidify not just the importance of the technology but also the cultural preservation we’re seeing handled by Gravity Jack.


From Hospitals To The Military, Exploring the Limits of Extended Reality with GigXR

The groundbreaking work being done by Gravity Jack is only one example of the full potential of such technology. While we primarily look at such tech through the lens of entertainment and leisure, there are many companies hard at work, doing what they can to improve our overall quality of life. GigXR’s mixed reality technology has been simplifying holographic healthcare training at the institutes it partners with, allowing healthcare providers to get a running start at the careers they are all set to step into. With a library of holographic applications provided by the Gig Immersive Learning Platform, the service can eliminate the risks of training and the intrusive nature of learning that sometimes has to take place in the medical field, with a solution that is as novel as it sounds—holograms.

The company’s HoloHuman and HoloPatient allow instructors to guide students through human anatomy and the host of conditions that affect it without having to ensure cadaver availability for every single training session. Not only does this make learning a more repeatable experience, but it also allows trainees to “assess, diagnose, and treat real-world conditions through true-to-life holographic simulations of standardized patient scenarios.” There are additional HoloScenarios and Insight models that are further designed to break down human physiology and understand it in depth, in a way that photos and videos might not allow. It is truly a fascinating application of extended reality.

“Clinical and educational spaces are increasingly adopting extended reality. We are at an inflection point, equipped with evidence that extended reality is effective at scale and offers certain advantages over traditional methods. Our customers, having seen these benefits, are now investing in the necessary infrastructure for extended reality. This includes hiring specialized personnel and creating optimal environments for integrating extended reality into curricula, with adoption rates increasing as a result.”

— Jared Mermey, CEO at GigXR

While the exploration into the field of traditional healthcare is fascinating enough to have us talking for hours exploring the possibilities, GigXR doesn’t limit itself to just one playing field. Understanding the full potential of mixed and extended reality, the company realized that there was room for practice and preparation even within tactical scenarios. CEO of GigXR, Jared Mermey, was quite willing to bring us up to date with some of the alternate applications of their technology. “GigXR’s collaboration with the United States Air Force in developing Tactical Combat Casualty Care (TCCC) training is done through the Small Business Innovation Research (SBIR) program. We are very proud to work with the 354th Medical Group USAF in Eielson, Alaska, focusing on TCCC, a vital certification for all Armed Forces members requiring regular recertification. TCCC is needed by all service members with the idea being: How do you complete a mission when someone you’re on that mission with may get wounded?

“Extended reality enhances realism and introduces more varied scenarios, more accurately mimicking the unpredictability of real-world combat environments compared to traditional simulation methods. Personnel can practice critical decision-making in a variety of simulated, high-stress scenarios. It also allows scalable training without the limitations of time and space, enabling personnel, including reservists, to train from any location, removing the need for centralized training sessions. This flexibility is particularly valuable for reservists who typically train less frequently and are spread across the country.” 

The application of extended reality to scenarios that are hard to recreate is probably one of the most useful abilities of the technology. Whether it’s used in training first responders or preparing individuals for emergency situations, the technology allows room to really understand the various pressing elements of such scenarios before leaping into action.

 “I believe the future of training, transcending industries, is XR and AR. The form these technologies will take is still to be determined—whether goggles, designer sunglasses, contact lenses, or even brain chips—but the concept of digital overlays on top of the real world is inevitable,” says CEO Jared Mermey. “At GigXR, we’re not fixated on predicting the final form factor; our focus is on ensuring our platform’s flexibility will deliver the content for any prevailing technology. We will scale powerful training content for doctors, nurses, and future healthcare professionals, regardless of the device they use today and well into the future.” 

We’re excited to see what extended reality has in store for us this year and new gadget releases notwithstanding, there are likely to be more creative iterations of the technology we have seen so far. 

Where Are Metaverse Platforms Headed? Breaking Down the Basics
https://www.technowize.com/where-are-metaverse-platforms-headed-breaking-down-the-basics/
Sun, 19 Nov 2023

Some might say that metaverse platforms are over-ambitious pipe dreams, but the economy indicates otherwise. Major investments are being made into virtual worlds and immersive virtual experiences, and progress is occurring much faster than anyone could have expected.
When Facebook/Meta CEO Zuckerberg announced the creation of a metaverse platform as the future of technology, we were sincerely confused about what that meant. With metaverse platforms sporadically popping up everywhere, we can safely say that the confusion is still thriving, although the various VR (virtual reality), AR (augmented reality), and XR (extended reality) immersive experiences have allowed for an inkling of where these metaverse platforms are headed.

(Image credit – Freepik)

According to Fortune Business Insights, the global metaverse market was valued at $234.04 billion in 2022 and is projected to reach $3,409.29 billion by 2027. Goldman Sachs analyst Eric Sheridan recently stated that these metaverse platforms could be an $8 trillion opportunity. These numbers make it evident that there is a real market, and real resources are being invested into building metaverse platforms: virtual worlds on and apart from the internet.

Metaverse Platforms—Breaking it Down

Definitions of the metaverse are often simplistic—virtual worlds with their own separate features and experiences, built with a detailed metaverse economy. But don’t we already have that? It can be very easy to equate it to gaming worlds and their systems that let you design your avatar and have an immersive experience in the lore, economy, and interaction built into it. A game as familiar as the Sims might come to mind, where your characters live their lives and spend their virtual currency to get everything they desire. But one might argue that there’s limited free will and it is very obviously a game. Then perhaps open-world free-roam games might offer a better example, with GTA and Marvel’s Spider-Man allowing you a little more freedom. 

The parallels drawn are largely correct. Metaverse platforms are, to a large extent, arising out of gaming formats and expanding on the extent of what is possible in these alternate worlds. According to Global Market Estimates Research & Consultants, the global metaverse-in-gaming market is projected to expand from $36.81 billion in 2022 to $710.21 billion by 2027, at a CAGR of 38.2 percent. Gaming companies are in the best position to build metaverse platforms with advancing technology. These metaverse platforms are being built by tech companies creating their own virtual spaces, each a version of the metaverse that best reflects its maker’s perception of it.

With the incorporation of VR, AR, and other technology that is available, metaverse platforms hope to create a parallel universe of experiences where people can represent themselves and have fully immersive experiences as a part of these parallel worlds. Social VR or social virtual reality is gaining prominence as a result, where multiple users can join in and interact with each other, not just through text chats or audio interactions, but as independent players within the worlds that are created. Think of movies like Ready Player One and Tron: Legacy.

Terms That Can Help You Understand the Metaverse Better

The “Metaverse” is not the only term that might have taken you by surprise. With the rise of metaverse platforms we are witnessing today, a pretty elaborate vocabulary is necessary to talk about this concept with some amount of expertise. Terms like XR (extended reality) and digital twins get thrown around a lot, but they’re easier to understand once you have a good grasp of the metaverse itself.

Web3

The term describes the future of the internet, or the next version of it. According to McKinsey, “Web3 is the idea of a new, decentralized internet built on blockchains, which are distributed ledgers controlled communally by participants.” We are currently most familiar with Web2, which is dominated by large corporations while allowing users a degree of freedom to contribute content. The future of the internet will largely be one dominated by digital assets and tokens, but also one that is set to put more control in the user’s hands.

VR (Virtual Reality)

VR headsets are no longer a hope for the distant future. Meta Quest Pro and Sony PlayStation VR2 are examples of headsets already available on the market, with Apple’s Vision Pro set to launch next year. These devices are meant to allow you to visualize and interact with a virtual environment that is designed with software. 

AR (Augmented Reality)

If you remember when Pokemon Go came out and the world collectively ran outside, that’s a good place to build your understanding of AR. If you’ve seen platforms that allow you to see what a couch would look like in your room or what a pair of glasses would realistically look like on your face, you’ve likely experienced AR for yourself. It allows more visual interactions between software and your immediate environment.

MR (Mixed Reality)

Mixed reality incorporates the potential of augmented reality and overlaps it with real objects in your immediate environment. Physical spaces are often mapped with computer-generated layouts layered over them to present a new reality that people can actively explore. Often paired with VR headsets, it aims for a very immersive experience. Microsoft HoloLens and Magic Leap are some of the leading projects in MR.

XR (Extended Reality) 

The term extended reality is a blanket term for all of these different facets of what is being done with tech, including VR, AR, and MR. It discusses the full power of recreating digital experiences and is essentially what metaverse platforms hope to achieve. 

Digital Twins

While the term “digital twins” sounds a little odd, it is not a new concept, and it has vast potential to change how we perceive and plan for things. It refers to a virtual copy of a real-life object, process, or individual, which can either imitate the actual object or allow you to make changes to visualize their effect. McKinsey offers Google Maps as a great example of a digital product twin of a road, or of the earth as a whole, replicating roads and obstructions and providing real-time traffic updates that reflect what is happening around you.

There are many uses for such tech. Within metaverse platforms, such technology could be used to provide immersive environments where a product could be visualized and its life cycle mapped or its flaws could be identified at a much faster pace. 

Metaverse Economy

If virtual worlds reflect the real one, then a metaverse economy seems like a logical next step for many. While we’re already familiar with in-game currency and the potential to buy virtual coins for virtual objects like a new avatar skin, the metaverse economy intends to build on this idea further. Digital assets and tokens have already started becoming more commonplace despite being new concepts, and NFTs have permanently changed how we value online assets. These concepts will become key to the operation of the metaverse economy.

NFTs (Non-Fungible Tokens)

In brief, NFTs are the newest iteration of what is possible with cryptocurrency and blockchain technology. They refer to digital assets, tokenized by blockchain tech with unique identification metadata that sets each apart from others. Defining how these tokens are different from cryptocurrency, Investopedia states, “Cryptocurrencies from the same blockchain are interchangeable—they are fungible. Two NFTs from the same blockchain can look identical, but they are not interchangeable.” Exchanging these digital tokens will likely be something users have to get familiar with, in order to participate in metaverse platforms.
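A toy sketch (not real blockchain code) can make that distinction concrete: two tokens can point at identical-looking content yet remain non-interchangeable, because each carries its own unique identifier. The names and URIs below are made up for illustration.

```typescript
// Toy illustration only, not real blockchain code: NFTs compare by
// identity (token ID), not by the content they point to.

interface Nft {
  tokenId: string;    // unique identifier recorded on-chain
  contentUri: string; // may be identical across different tokens
  owner: string;
}

const first: Nft = { tokenId: "1", contentUri: "ipfs://same-artwork", owner: "alice" };
const second: Nft = { tokenId: "2", contentUri: "ipfs://same-artwork", owner: "bob" };

console.log(first.contentUri === second.contentUri); // true: identical-looking assets
console.log(first.tokenId === second.tokenId);       // false: still distinct, non-fungible tokens
```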

Are We Close To Fully Functioning Metaverse Platforms?

Image credit – Roblox

A study by Pew Research Center found that 54 percent of technology experts interviewed were confident that the metaverse would be a refined, fully built, everyday immersive experience by 2040, while 46 percent disagreed. While the scales could tip either way, the 2040 timeline we are working with might be a good indicator that we aren’t very close to understanding the full extent of what is possible with these virtual worlds just yet. It is hard to define what a fully functioning metaverse platform might look like even a year from today.

Investments are being made and many forms of groundbreaking tech appear to be emerging every day, but it is unlikely that metaverse platforms will become the norm for regular folk anytime soon. Yes, there are going to be aspects that leak into the real world, especially with VR devices becoming more commonplace. The Edverse Metaverse Platform is designed to redefine learning in the virtual environment and create virtual shared spaces for people to meet and collaborate. The Fortnite Metaverse gained much popularity with its 2020 virtual concert and its creative mode that gave players the autonomy to design a virtual world. The Roblox Metaverse was realized even before conversations around the metaverse began, and the creative liberty it offers users is what many others have aspired to recreate.

Many facets of the metaverse are growing independently as tech giants and small players alike try to develop their proprietary tools and technology to navigate these environments. With these massive investments into metaverse platforms, immersive experiences in virtual worlds will likely keep evolving in many unique ways.

The post Where Are Metaverse Platforms Headed? Breaking Down the Basics appeared first on Technowize.

Is Apple Losing Confidence as It Cuts Back on Vision Pro Headset Production? https://www.technowize.com/is-apple-losing-confidence-as-it-cuts-back-on-vision-pro-headset-production/ https://www.technowize.com/is-apple-losing-confidence-as-it-cuts-back-on-vision-pro-headset-production/#respond Tue, 04 Jul 2023 12:17:48 +0000 https://www.technowize.com/?p=39268 The $3,500 price tag traverses beyond Apple’s standard premium pricing and we can concoct a few reasons why.

The post Is Apple Losing Confidence as It Cuts Back on Vision Pro Headset Production? appeared first on Technowize.

Silicon Valley stalwart Apple is rumored to have slashed production forecasts for its newly unveiled Vision Pro headset. After nearly seven years in development, the Vision Pro was billed as Apple’s most significant product since the iPhone. The verdict seemed clear, which makes the rumors of a production cut all the more riveting.

Apple is accustomed to blockbuster products, and such a deliberate decision to prune production raises a question: has Apple lost confidence that the Vision Pro headset will be a success?

Vision Pro: Apple’s First Mixed Reality Headset

Apple has already announced that the Vision Pro spatial computing headset will not go on sale until early 2024. With that in mind, we can put together a few reasons why Apple may have had to scale back Vision Pro production.

The $3,500 price tag goes well beyond Apple’s standard premium pricing, and the mixed reality headset’s lofty cost comes down to two factors: first, it is a first-generation product, and second, Apple has had to create custom hardware components over seven to eight years of research and development.

Another important factor is the complexity of the Vision Pro’s design, which adds to the difficulty of producing it at scale.

Vision Pro

(Image Courtesy – Apple)

Industry analysts attribute the delay more to supply chain problems than to the speculated Vision Pro SDK program for developers.

Sources close to the company say Apple was preparing to make fewer than 400,000 Vision Pro units in 2024, solely through its ties with Chinese manufacturer Luxshare. But Apple has reportedly ordered certain components in quantities sufficient for only 130,000 – 150,000 units. This implies a significant cut in Vision Pro production, well below the internal sales target of 1 million units.

Apple Wants Vision Pro To Appeal To Mass Audiences

Following years of missed deadlines and launch delays, Apple is reportedly unhappy with the production yield of defect-free micro-OLED displays. As both a major factor in seamless visuals and the most expensive component, the micro-OLEDs are vital to the Vision Pro headset.

The expected $3,500 price point also reflects Apple’s low manufacturing yields and the higher costs that come with production inefficiencies.

Meanwhile, Sony is reluctant to ramp up its mixed-reality headset production without a clearer sense of how the AR and VR headset market will expand.

Furthermore, to appeal to mass-market consumers, Apple has reportedly partnered with Korean display makers Samsung and LG for a more affordable second-generation headset. Apple has insisted on micro-OLED displays even for the non-Pro headset, but it may have to compromise and explore mini-LED in order to lower production costs. Even this affordable version of the Apple Vision Pro may be pushed back.

“Given the limited production numbers, the Vision Pro headset will be flying off the shelves by being pre-ordered by Apple’s high net worth users in the US and its loyal fans.” 

Many analysts believe that Apple will defy all odds to exceed an audience base of 20 million within five years of Vision Pro’s launch, owing to its ecosystem of loyal users. 

The post Is Apple Losing Confidence as It Cuts Back on Vision Pro Headset Production? appeared first on Technowize.

Apple Launches Developer Tools: Creating Spatial Experiences with Vision Pro SDK https://www.technowize.com/apple-launches-developer-tools-creating-spatial-experiences-with-vision-pro-sdk/ https://www.technowize.com/apple-launches-developer-tools-creating-spatial-experiences-with-vision-pro-sdk/#respond Thu, 22 Jun 2023 12:00:18 +0000 https://www.technowize.com/?p=39103 Apple Vision Pro’s spatial development will be able to take full advantage of the boundless canvas in Vision Pro to synthesize a new class of spatial computing apps.

The post Apple Launches Developer Tools: Creating Spatial Experiences with Vision Pro SDK appeared first on Technowize.

Apple has announced the availability of the visionOS software development kit, enabling Apple’s global developer community to build groundbreaking content for the Vision Pro. The Apple Vision Pro SDK will be available for months before the headset goes on sale in the US for $3,500.

Vision Pro is billed as the world’s first spatial computer. It runs visionOS and lets users interact with digital content in the most natural way possible – using their hands, eyes, and voice.

Developers will be able to take full advantage of the Vision Pro’s boundless canvas to build a new class of spatial computing apps that blend digital content seamlessly with the physical world. Apple Vision Pro’s developer tools support novel app experiences across categories such as gaming, design, productivity, and more.

Apple Vision Pro SDK

(Image Courtesy – Apple)

The tech giant is counting on developer interest to build hype around the system after it received a lukewarm response at its WWDC unveiling earlier this month.

Create Spatial Experiences With Apple Vision Pro SDK

Throughout years of AR and VR development, content has been an ‘apple of discord’. No wonder Apple wants a well-stocked App Store even before the system arrives early next year.

In July, the tech giant is slated to open Apple developer labs for the Vision Pro SDK in Cupertino, Munich, London, Singapore, Tokyo, and Shanghai, where developers can test their apps on Apple Vision Pro hardware and get support from Apple’s in-house engineers. The developer tools also enable teams to build, iterate, and test on the Apple Vision Pro.

Developers’ main complaint has been the lack of hands-on time with an extremely expensive, yet-to-be-launched headset.

“Apple Vision Pro redefines the possibilities of a computing platform. Using powerful frameworks they already know, developers can start building visionOS apps and design all-new experiences for their users.”

The tech company believes that creating spatial experiences with Apple Vision Pro will unlock new opportunities for developers to imagine ways to help their users be productive, have fun, and connect. The SDK spans a spectrum of immersive content – windows that present depth and 3D content, volumes that can be viewed from any angle, and spaces that can be unbounded for fully immersive 3D content.

Apple Vision Pro SDK

(Image Courtesy – Apple)

The visionOS SDK is built on familiar frameworks and developer tools such as Xcode, SwiftUI, RealityKit, ARKit, and TestFlight.

“Manufacturers can use AR solutions to collaborate on critical business issues by bringing interactive 3D content to life, from a single product to a whole production line.”

By default, apps launch into the Shared Space, where they can use windows and volumes to display content. Inside a Full Space, an app can present unbounded 3D content, opening a portal to fully immersive experiences.
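As a rough sketch of how those concepts map onto code, assuming only the frameworks Apple names above (SwiftUI and RealityKit), a visionOS app might declare an ordinary window for the Shared Space and an ImmersiveSpace for a Full Space. The scene identifier and the placeholder sphere below are illustrative, not taken from Apple’s sample code.

```swift
import SwiftUI
import RealityKit

@main
struct SpatialDemoApp: App {
    var body: some Scene {
        // A regular window that lives in the Shared Space alongside other apps.
        WindowGroup {
            ContentView()
        }

        // A Full Space for unbounded, immersive 3D content.
        ImmersiveSpace(id: "Immersive") {
            RealityView { content in
                // Placeholder 3D content: a simple 10 cm sphere.
                let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
                content.add(sphere)
            }
        }
    }
}

struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter Full Space") {
            Task { _ = await openImmersiveSpace(id: "Immersive") }
        }
    }
}
```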

Early apps built with the Apple Vision Pro SDK include Complete HeartX and Djay. Complete HeartX uses hyper-realistic 3D models and animations to prepare medical students for clinical practice, visualizing conditions such as ventricular fibrillation so students can apply their knowledge to treat them. Djay, suited to beginners and seasoned professionals alike, places users in an immersive environment that responds automatically to their mix.

The post Apple Launches Developer Tools: Creating Spatial Experiences with Vision Pro SDK appeared first on Technowize.

Apple’s xrOS Headset’s Price Could Be $3,000 and Here’s Why https://www.technowize.com/apples-xros-headsets-price-could-be-3000-and-heres-why/ https://www.technowize.com/apples-xros-headsets-price-could-be-3000-and-heres-why/#respond Wed, 17 May 2023 08:51:49 +0000 https://www.technowize.com/?p=38673 The reasoning behind why Apple's augmented reality headset could be tipped to cost a fortune is likely because the xrOS headset is predicted to be targeted to developers and not regular consumer use.

The post Apple’s xrOS Headset’s Price Could Be $3,000 and Here’s Why appeared first on Technowize.

For the better part of a decade, people have speculated about Apple’s debut in virtual and augmented reality headsets. Apple’s first VR/AR headset now seems like a solid bet, with more details expected at WWDC (Worldwide Developers Conference) in June 2023. The xrOS wordmark – stylized shorthand for ‘extended reality OS’ – was registered ahead of the launch.

Although the tech giant has never confirmed it is working on a headset, Apple headset release rumors have been the talk of the town for years. The latest from the rumor mill is that it will be called the ‘Reality Pro’, capable of both augmented reality and virtual reality experiences, with users able to transition between AR and VR using a Digital Crown-style dial.

Apple xrOS headset

Apple’s xrOS headset, a ‘mixed reality’ AR/VR device, could cost around $3,000. The price point has been rumored more than once and has consistently stunned Apple’s loyal family of users. The likely reason the headset is tipped to cost a fortune is that it is expected to target developers rather than regular consumers.

The new headset is alleged to run an operating system called xrOS and to support FaceTime calls, eye and hand tracking, gaming, and reading titles from Apple Books, with an external battery pack that fits easily in a pocket.

CEO Tim Cook has been envisioning AR and VR for almost a decade, championing their fundamental importance in bringing people together.

“Take one side of the AR/VR piece – the idea that one could overlay the physical realm with things from the digital space could greatly enhance people’s connection and communication.”

Apple xrOS Headset: VR/AR Headset Hardware Details

While other VR/AR headsets like the PSVR 2 pack advanced virtual reality tech yet cost only about $500, the details ahead explain why Apple’s xrOS headset could feel like breaking the bank.

Apple oracle and analyst Ming-Chi Kuo has outlined a few reasons why Apple’s mixed reality headset is estimated at $3,000.

Kuo speculates that the Apple VR/AR headset will carry two Apple-designed chips, most probably manufactured by TSMC. One will be analogous to the M2, while the other will be a lower-tier chip handling less demanding tasks, leaving the M2-class chip to shoulder most of the processing.

Apple xrOS headset

He also reasons that assembly of the headset will be exclusive to Luxshare ICT, a hint that Apple’s chassis is a class apart from other VR headsets. Kuo also points to micro-OLED displays for the Apple headset, a technology that has been exclusive to rival Sony.

The new headset could also feature advanced tracking inside and outside the device with an estimated 12 cameras, in contrast to the Sony PSVR’s 4. With an external power supply from Goertek, the major components of Apple’s VR/AR headset are together touted as the ‘most expensive material costs’ in the category.

If the release rumors prove true, the VR/AR headset will be a testbed for developers to explore how far the paradigm of mixed reality can be pushed.

“The headset can become the most worthy investment trend in the consumer electronics market if Apple’s VR headset impressions are better than expected at the release.”

Apple Glasses could also join the tech giant’s lineup of VR/AR products should it find a strong footing in the industry – but not before a more affordable range of the Apple headset arrives.

Apple’s unveiling of the VR/AR headset at WWDC 2023 could not only be an exciting event but also set the benchmark for the frontier of virtual and augmented reality, on both the hardware and software sides.

The post Apple’s xrOS Headset’s Price Could Be $3,000 and Here’s Why appeared first on Technowize.

Xiaomi Wireless AR Smart Glass Unveiled at MWC 2023 https://www.technowize.com/xiaomi-wireless-ar-smart-glass-unveiled-at-mwc-2023/ https://www.technowize.com/xiaomi-wireless-ar-smart-glass-unveiled-at-mwc-2023/#respond Mon, 27 Feb 2023 13:58:42 +0000 https://www.technowize.com/?p=37888 Xiaomi, which remains one of the world's leading consumer electronics and smart manufacturing companies, unveiled its brand new concept technology achievement, Xiaomi Wireless AR Glass Discovery Edition, at Mobile World Congress 2023 (MWC 2023), on the very first day of the high-profile event.

The post Xiaomi Wireless AR Smart Glass Unveiled at MWC 2023 appeared first on Technowize.


Xiaomi Wireless AR Smart Glass has made its dream debut on the grand stage of MWC 2023. For the uninitiated, MWC (Mobile World Congress) is the largest and most influential event for the connectivity ecosystem. It’s said that whether you’re a global mobile operator, device manufacturer, technology provider, vendor, content owner, or simply interested in the future of tech, you can’t afford to miss it.

The 2023 MWC, currently taking place in Barcelona, has jaw-dropping numbers in its kitty – ‘80K+ Attendees, 50% Director and above, 2000+ Exhibitors and 200+ Countries and Territories.’ And Xiaomi chose this high-voltage platform to show the entire tech world that it has another ace up its sleeve – the all-new Wireless AR Glass Discovery Edition.

At the moment, Xiaomi seems to be going all guns blazing – only two days ago, it held a launch event for its flagship Xiaomi 13 series, which gave the brand a massive surge in attention. Now it appears to be building on that momentum with the Xiaomi Wireless AR Smart Glass.

Let’s delve deeper into the story and discover what’s in store. Here we go!

Wireless AR Smart Glass Discovery Edition

[Image Credit: Xiaomi]

Xiaomi Wireless AR Smart Glass: Wearable Magic? 

Xiaomi, which remains one of the world’s leading consumer electronics and smart manufacturing companies, unveiled its brand new concept technology achievement, Xiaomi Wireless AR Glass Discovery Edition, at Mobile World Congress 2023 (MWC 2023), on the very first day of the high-profile event. 

One must remember that an AR smart glass from Apple was on the cards until a while ago, but the intense work on that much-awaited device has reportedly been shelved. Xiaomi appears to have seized the opportunity to announce such a device, probably the first of its kind headed for mass-scale production.

Now let’s have a close look at the major highlights in order to get a fair idea about this upcoming product. Here you go! 

  • The Xiaomi Wireless AR Glass Discovery Edition has reportedly been designed with the individual experience in mind. Instead of relying on a wired connection to a host computing device, the glasses use Xiaomi-developed high-speed interconnection buses to achieve a high-speed data connection from smartphone to AR glasses.
  • Built on the Snapdragon XR2 Gen 1 platform and featuring Xiaomi’s proprietary low-latency communication link, these Snapdragon Spaces XR Developer Platform-supported glasses offer wireless latency as low as 3 ms between the smartphone and the glasses, and full-link wireless latency as low as 50 ms, which is comparable to wired solutions.
  • The AR glasses feature an extremely lightweight design that incorporates materials such as magnesium-lithium alloy, carbon fiber parts, and a self-developed silicon-oxygen anode battery. Weighing just 126 g, the glasses are meant to minimize any physical burden on the user.
  • Moreover, thanks to analyzing tens of thousands of head tracking data samples, the glasses have been calibrated with precision, taking into account details such as the center of gravity, leg spacing, angle, nose rest, and other factors that contribute to a superior experience.
  • Xiaomi claims the Wireless AR Glasses are among the industry’s first to achieve a “retina-level” display. For AR glasses there is a “critical value” quality threshold: once angular resolution, or PPD (pixels per degree), approaches 60, the human eye can no longer distinguish individual pixels. The Xiaomi Wireless AR Glasses reach a PPD of 58, claimed to be the closest in the industry to meeting this specification (see the worked example after this list).
  • Leveraging a free-form optical module that comprises a pair of MicroOLED screens, Xiaomi Wireless AR Glass also comes with free-form light-guiding prisms to achieve a clear picture display. These free-form prisms are capable of performing complex light refraction in a limited volume. The content displayed on the screen is reflected by three surfaces within the light-guiding prisms, resulting in a final presentation in front of the user’s eyes.
  • The AR glasses’ optical module design minimizes light loss and produces clear and bright images with a to-eye brightness of up to 1200 nit, providing a strong foundation for AR applications. Additionally, the AR glasses come equipped with electrochromic lenses that can adapt to different lighting conditions. These lenses enable a blackout mode that offers an immersive experience when viewing content, while the transparent mode produces a more vivid AR experience that blends reality and virtual elements.
  • Xiaomi Wireless AR Glass also features an innovative, self-developed micro gesture system that enables one-handed, highly precise control through gestures alone. Xiaomi believes this showcases one of the directions human-computer interaction will take in the future.
  • Xiaomi uses the joints of the user’s inner fingers as the gesture recognition area. The directional pad is centered on the second joint of the middle finger, with the second joint of the index finger representing the upward direction; combined with the surrounding areas, this forms a four-way directional key for basic navigation. In addition, the 12 knuckles function much like the Chinese nine-key input method, allowing text input by tapping the finger area with the thumb, while sliding the thumb along the index finger enters and exits applications. Looking ahead, Xiaomi hopes to enable sliding and tapping operations through free movement of the thumb in the palm.
  • Micro gestures on Xiaomi Wireless AR Glass enable users to perform daily app usage operations such as selecting and opening apps, swiping through pages, and exiting apps to return to the start page, all without using a smartphone for controls.
  • Xiaomi Wireless AR Glass incorporates a low-power always-on (AON) camera that enables prolonged gesture interaction. Users can also opt for conventional smartphone controls, pairing the phone as a gesture or touchpad controller.
  • The device supports a variety of large-screen applications available throughout the ecosystem, enabled by Mi Share’s application streaming capability. Popular applications like TikTok and YouTube can optimize display area usage of the glasses, essentially turning them into a portable large screen. Additionally, the AR capability allows users to place familiar apps anywhere in the viewing space and adjust their interface size via spatial gestures, thus enhancing the efficiency and overall experience of information access.
  • As the world’s largest smart home platform, Xiaomi has integrated AR to offer a unique experience to users. For instance, Xiaomi Wireless AR Glass allows users to “grab” the screen from a typical television screencast, and then continue watching it on the glasses. Furthermore, Xiaomi has extracted common operations of smart devices, enabling users to operate devices through AR scenes. For instance, when looking at a lamp, users can use spatial gestures to click a virtual button to turn that lamp on or off. Xiaomi has also integrated a spatial audio experience, allowing virtual speakers to be linked with the real environment.
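
Since the PPD figure anchors the “retina-level” claim above, here is the arithmetic behind pixels per degree as a small sketch; the resolution and field-of-view inputs are illustrative assumptions chosen to reproduce the quoted 58 PPD, not published Xiaomi specifications.

```swift
// Hypothetical inputs, for illustration only: Xiaomi has not published the exact
// per-eye resolution and field of view behind its 58 PPD figure.
let horizontalPixels = 3_480.0     // assumed per-eye horizontal resolution
let horizontalFOVDegrees = 60.0    // assumed horizontal field of view

// Pixels per degree: how many pixels span one degree of the wearer's view.
let ppd = horizontalPixels / horizontalFOVDegrees
print(ppd)  // 58.0 – near the ~60 PPD threshold where the eye stops resolving individual pixels
```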

Xiaomi Wireless AR Glass Discovery Edition needs to be paired with the Xiaomi 13 or other Snapdragon Spaces-ready devices. The glasses come in a titanium-colored design, support three sizes of nose pieces, and offer an attachable myopia clip for nearsighted users. In terms of software support, the glasses are compatible with Qualcomm Snapdragon Spaces, OpenXR, and Microsoft’s MRTK development framework. Xiaomi reportedly intends to work closely with developers to expedite the arrival of AR.

Through the New Looking Glass

Can AR smart glasses take over from smartphones in the distant future? At first the idea might draw a big laugh, but on reflection it becomes hard to brush the possibility aside completely. When we flip through the pages of history, we find that things once thought integral to our daily lives were eventually forced to take a back seat. The same could happen to smartphones a few decades from now. You never know.

Therefore, if the significance of smartphones does dwindle in the distant future, AR smart glasses are tipped as the firm favorites to take over. Who knows whether the Xiaomi Wireless AR Smart Glass will be remembered in the history of tech as the device that initiated a major paradigm shift, well ahead of its time.

But these are still early days. AR and VR devices have yet to be embraced by users in every nook and corner of the world. People are still skeptical, and the devices currently on offer have yet to compel them, in large numbers, to try new experiences.

However, things might just change if Xiaomi manages to strike a chord with users across the globe with its Wireless AR Glass Discovery Edition. Even though Meta failed to make a mark with the Meta Quest Pro and similar devices, there is still ample room in the market to be utilized.

Just a couple of days ago, Sony’s PlayStation VR2 officially launched worldwide, aimed at the huge global gaming community. Going by the quite positive initial response, it is fair to say Xiaomi can have high hopes, as things gradually appear to be changing.

However, we need to wait for the Wireless AR Glass Discovery Edition to arrive on the market and see how it fares. It goes without saying that there is a lot at stake, and its sales and feedback will have a lot to say about the direction of the entire segment.

Stay tuned with us, as we get back to you soon with more exclusive updates from the world of tech. 

The post Xiaomi Wireless AR Smart Glass Unveiled at MWC 2023 appeared first on Technowize.
