“Looking Glass” or “How I learned to let go and love WebXR”

Glossary of buzzwords used here today: XR (extended reality), VR (virtual reality), Tomato (a fruit), API (Application Programming Interface: the protocols and tools a piece of software exposes so you can tell it how to behave), IRL (in real life: you, where you are, now), AR (augmented reality: adding stuff to IRL)

Recent advancements in WebXR make it easier than ever to publish applications straight to the palm of a user's hand. With a bit of tenacity and some insight into WebXR and one of the frameworks built to work in tandem with it, you'll be creating engaging, interactive experiences with profound accessibility.

Do you remember the last time you were asked to download a file to your phone in order to access the small tidbits of information that came along with it? That was cool, right? Typing in the name, locating the service, waiting for the application to download, and finally, after the wait, opening it up to check what you had achieved. What if you could just open games and VR experiences in a browser, without downloading anything extra? What if all you needed was the browser you use to read those recipes for buttered toast that come along with a life story? What if all you needed was a URL and a dream?

Covering all of that ground while making the user interaction easier is exactly what WebXR, Three.js, A-Frame, Argon, and friends are chasing. We're all after the same thing when it comes to bringing AR/VR/XR to a web user: simpler, more portable usability.

So what is WebXR? It's an API that describes support for accessing augmented and virtual reality devices from a web browser, with the rendering handed off to WebGL. It's a series of protocols that let whatever control mechanism you intend to program be used in the browser: e.g. an Oculus headset, a mouse, or a stuffed Cheems with tracking capability and a pointer.
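As a rough sketch of what that looks like in practice (the button id and the log messages here are made up for the example), asking the browser for an immersive VR session goes something like this:

```javascript
// Check whether the browser exposes the WebXR Device API at all,
// then ask whether an immersive VR session is supported.
async function enterVR() {
  if (!navigator.xr) {
    console.log('WebXR is not available in this browser.');
    return;
  }

  const supported = await navigator.xr.isSessionSupported('immersive-vr');
  if (!supported) {
    console.log('No immersive VR support on this device.');
    return;
  }

  // Session requests must come from a user gesture, hence the button click below.
  const session = await navigator.xr.requestSession('immersive-vr');
  session.addEventListener('end', () => console.log('Session ended.'));
  // From here a renderer (WebGL, Three.js, A-Frame) takes over drawing the frames.
}

// "#enter-vr" is a hypothetical button on the page for this sketch.
document.querySelector('#enter-vr').addEventListener('click', enterVR);
```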

It doesn’t render graphics or do any of the excessive mucking about in mathematics for raycasters. That is the job of the GPU and WebGL.

Not too long ago, WebXR went by the name WebVR. And that can get a little confusing. I've run into a bit of cross-pollination from multiple sources of information all using the same "WebXR" name in my own recent travels, so my hope is to keep this cogent for you. The libraries I've been using alongside it are Three.js and A-Frame.

Here we go.

What is Three.js? Three.js is a library/API used to create and display animated 3D graphics in a browser using WebGL. Since Three.js is written in JavaScript and has been around in production for a long time, a small amount of coding effort is necessary to develop a project that shows your flair. The library also isn't equally capable on every browser, because the WebGL version the browser supports defines what Three.js can do. It is, however, actively maintained by a large open-source community. Still, take browser support with a grain of salt. Nobody uses Rockmelt, after all. (Please don't IM me if you use Rockmelt)
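For a sense of how small that coding effort can be, here's a minimal, slightly simplified Three.js sketch: a spinning cube rendered to the page. The XR-specific bits (enabling the renderer's xr mode and adding a session button) layer on top of this same skeleton.

```javascript
import * as THREE from 'three';

// Scene, camera, and renderer: the three pieces every Three.js app starts with.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 2;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// One cube with a material that needs no lights, to keep the example short.
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(0.5, 0.5, 0.5),
  new THREE.MeshNormalMaterial()
);
scene.add(cube);

// setAnimationLoop (instead of requestAnimationFrame) keeps running inside XR sessions too.
renderer.setAnimationLoop(() => {
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
});
```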

A-Frame, though? It's another web-based framework you can use to render an environment in WebXR. Since A-Frame scenes are authored in HTML markup (with Three.js running underneath), they load in just about every browser and need very little reworking for portability. The trade-off is that the declarative HTML layer isn't nearly as flexible as writing Three.js directly.
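By way of comparison, an entire A-Frame scene really is just markup. A sketch along these lines (the CDN version number is just whichever release you choose to pin) gives you a box, a sky, and free headset/mouse/touch controls:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Pulling A-Frame from its CDN; pin the release you actually want to target. -->
    <script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
  </head>
  <body>
    <!-- Everything inside a-scene is an entity A-Frame renders through Three.js. -->
    <a-scene>
      <a-box position="0 1 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```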

So lastly and most importantly, what is this WebGL I keep mentioning?

WebGL is a JavaScript API used to render interactive 2D and 3D graphics within a web browser without downloading add-ons. It provides GPU-accelerated rendering of graphics as part of the web page's canvas. It's the baseline structure that ties your hardware into the browser so the user can party hard with that fancy new game mechanic, or so you can teach new employees an engaging and necessary security procedure.
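At its most bare-bones, that tie-in is just asking a canvas element for a WebGL context and issuing commands to it; everything Three.js and A-Frame do eventually boils down to calls like these (the canvas id here is invented for the example):

```javascript
// Grab a <canvas id="view"> from the page and ask it for a WebGL context.
const canvas = document.getElementById('view');
const gl = canvas.getContext('webgl');

if (!gl) {
  console.log('WebGL is not supported in this browser.');
} else {
  // Clear the canvas to a solid color: the "hello world" of GPU rendering.
  gl.clearColor(0.1, 0.2, 0.3, 1.0);
  gl.clear(gl.COLOR_BUFFER_BIT);
}
```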

Each of the APIs mentioned requires content creation in another environment; each of these environments is a backbone for graphical interfaces in web format.

  1. WebXR lays the foundation for input, allowing for interaction between most devices and WebGL.
  2. WebGL runs the process on the base layer, telling your computing device how to handle what it’s being fed.
  3. Three.js and A-Frame tell the browser what they need, and where, in order to be pretty.

So then you’re sitting here wondering “Why do I need to know two of these libraries? What gain is there for me in not just choosing one of them and going ham?”

Well, I appreciate the candor and vernacular with which you speak. So I will answer you.

A-Frame and Three.js complement each other while also providing their own separate resources.

Both:

  • Target the web instead of a specific technology, allowing portability and consistency across platforms
  • Support development in virtual reality as well as 2D and 3D
  • Respond rapidly in a browser
  • Require nothing for users to download

What separates them?

  • Practically every browser accepts A-Frame because its scenes are plain HTML markup.
  • Three.js requires the browser to support a recent, matching version of WebGL.

But what makes using them both plus ultra:

  • Three.js can create the objects and let A-Frame handle their life cycles and interactions, producing a positive A/V/XR experience in a small and rapid way. Each provides strengths inside its own language basis (see the sketch just below).
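In practice that pairing usually looks like an A-Frame component whose code reaches down into Three.js. A-Frame exposes its bundled THREE global, so a hypothetical spinning-box component might look roughly like this:

```javascript
// A-Frame component; the name "spinning-box" is made up for this example.
// init() builds a Three.js mesh, while A-Frame owns when it attaches, updates, and removes it.
AFRAME.registerComponent('spinning-box', {
  init: function () {
    const geometry = new THREE.BoxGeometry(1, 1, 1);
    const material = new THREE.MeshStandardMaterial({ color: '#4CC3D9' });
    this.mesh = new THREE.Mesh(geometry, material);
    this.el.setObject3D('mesh', this.mesh);   // hand the mesh to this entity
  },
  tick: function (time, delta) {
    // A-Frame calls tick() every frame of the render loop; delta is in milliseconds.
    this.mesh.rotation.y += delta / 1000;
  },
  remove: function () {
    this.el.removeObject3D('mesh');           // clean up when the entity goes away
  }
});
```

Dropping <a-entity spinning-box></a-entity> into an <a-scene> is then all it takes to put that mesh in front of a headset, a phone, or a laptop browser.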

The world of VR and XR in web format hasn't hit the mainstream yet, though it looks daily like we may be getting a little closer. Many of us imagine the next big leap is via the Metaverse. Interest in extended/alternate/yada reality has skyrocketed ever since the artist formerly known as The Facebook Group decided to spearhead the VR movement. Meta, Apple, and Magic Leap (not an exhaustive list) are all looking to make mixed reality glasses a mainstream product, so you can soon hope to play paintball in an open field and throw fireballs at your friends without the added risk of accelerants or the cost of paintball markers. I can't wait to develop and 3D print a scanner similar to those in Dragon Ball Z and go live the meme, myself. These frameworks are a big part of the jumping-off points for making that dream a reality.

Since these frameworks (and more) can grab objects and track their location and movement, we're currently seeing advances in passthrough (VR mixed with IRL and AR) and environment interaction that require nothing more than your own physical presence.

What I expect in the short term is that most of the processes used in XR development will continue to be laid out in low-level programming and funneled into JavaScript. But over the next five years we could begin to see new high-level languages and frameworks appear seemingly out of thin air to match the web-oriented movement of XR products.

Links to learn more:

  • WebXR's GitHub and readme
  • Three.js's GitHub and readme
  • A-Frame's GitHub and readme
