
VR concepts and the immersive web

By Alexis Menard on Feb 19, 2019

This article is part of a series about creating responsive VR experiences.

Today the web is an integral part of our lives - people use it to find information, to stay connected with family and friends, for entertainment via movies and games, and in many cases, the web is used extensively as part of our jobs.

Web experiences need to support different types of devices and users in different parts of the world. This means that as a web developer, you must support various screen sizes, input methods, connection speeds, and device capabilities. In addition, you must adapt your web content to all types of users to make sure they experience all of your content, whether they have disabilities or not.

Spending time to tailor your experience to suit your users is always a good return on your resource investment. Users who have a subpar experience are unlikely to come back to your website, just like they wouldn’t return to a physical store or restaurant.

When talking about immersive experiences, we can classify them into two categories: Augmented Reality and Virtual Reality. Virtual reality (VR) implies a completely immersive experience that shuts out the physical world. Augmented reality (AR) adds digital elements to a live view, often by using the camera on a smartphone or glasses. In this article, we’re going to focus on VR.

Today’s VR market already supports many diverse capabilities. For example, some systems provide room-scale experiences, while others don’t; some provide two-handed input methods, while others don’t. Users can be on a mobile phone, an all-in-one headset, a mid-range computer, or a high-end computer with discrete graphics. If you decide to create an immersive experience and want to reach the maximum number of users, you must take this heterogeneous market into account.

Before we get into the details, let’s quickly touch on why you would create an immersive experience on the web in the first place. Creating your experience on the web has several benefits, and I’m going to lay out some of them here. The web is frictionless—it doesn’t require users to download your app from a store, switch context, wait for the download to finish, then engage with your content. By navigating to your URL and clicking a button, users can be immersed right away. Discovery is another big benefit of the web platform: URLs are easily shareable, sites work across platforms, and web content is efficiently indexed by search engines. The web is also ephemeral, which means users don’t have to care about cleanup, uninstalling, and so forth, because the browser takes care of that if a website isn’t accessed for a long time. This aspect is especially valuable for immersive VR experiences, whose download size tends to be larger than that of typical web experiences.

You can create immersive experiences on the web today with the World Wide Web Consortium (W3C) WebXR Device API, which lets you render both AR (Augmented Reality) and VR (Virtual Reality) experiences right from the page in your favorite browser. (The term XR is commonly used to cover both AR and VR.) In this post I will cover VR, explaining how you can create responsive VR experiences for the web that reach the maximum number of users.

A few words of warning

The WebXR API is still being refined (the First Public Working Draft has just been published), so I will try my best to update this series to reflect the changes and keep these articles evergreen. Some of the pain points highlighted in this post may be addressed at the WebXR Device API specification level. I’ll try to link to the relevant PRs and update this post accordingly.

This article is by no means complete. I’m going to focus on some of the topics involved in creating immersive WebXR experiences, for example supporting the heterogeneous hardware ecosystem of VR devices and handling various types of inputs.

However, here are some interesting topics that I will not cover in this article (maybe I’ll cover some of them in future blog posts):

  • Accessibility in XR: This is a topic that still needs quite a lot of research and best practices, yet it’s very important.
  • UX Design in VR: the field is still being explored and there are few resources on the internet. I recommend that you watch VR UX from my colleague Seth Schneider, a short video series on best practices for creating VR experiences.
  • Optimizing the VR experience based on the device you’re running on. This is a typical problem that you can encounter in various other non-VR experiences, for example, video resolution depending on network quality and screen size, levels of detail for a video game, etc.

The heterogeneous world of VR experiences



3 Degrees of Freedom (3DoF) in the context of VR means that the VR system is able to track the user’s head movement or, to put it simply, where the user is looking. This is called the pose and in a 3DoF system it contains three pieces of information: the yaw, the pitch, and the roll. 

3DoF axes (credit: Wikipedia)
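To make the pose concrete, here is how yaw, pitch, and roll can be extracted from the orientation quaternion a VR system typically reports. This is an illustrative sketch rather than WebXR API code; the function name and the Y-up, -Z-forward axis convention are assumptions:

```javascript
// Convert a unit orientation quaternion {x, y, z, w} into yaw, pitch,
// and roll (in radians), assuming the Y-up, -Z-forward convention
// common in WebGL scenes.
function quaternionToYawPitchRoll(q) {
  // Yaw: rotation around the vertical (Y) axis -- looking left/right.
  const yaw = Math.atan2(2 * (q.w * q.y + q.x * q.z),
                         1 - 2 * (q.y * q.y + q.x * q.x));
  // Pitch: rotation around the side-to-side (X) axis -- looking up/down.
  // Clamp to avoid NaN from floating-point drift at the poles.
  const sinPitch = Math.max(-1, Math.min(1, 2 * (q.w * q.x - q.y * q.z)));
  const pitch = Math.asin(sinPitch);
  // Roll: rotation around the forward (Z) axis -- tilting the head.
  const roll = Math.atan2(2 * (q.w * q.z + q.x * q.y),
                          1 - 2 * (q.x * q.x + q.z * q.z));
  return { yaw, pitch, roll };
}

// Identity quaternion: the user is looking straight ahead.
const level = quaternionToYawPitchRoll({ x: 0, y: 0, z: 0, w: 1 });
// Quaternion for a 90-degree turn around the vertical axis.
const turned = quaternionToYawPitchRoll(
  { x: 0, y: Math.SQRT1_2, z: 0, w: Math.SQRT1_2 });
```

In practice you rarely need the Euler angles themselves; rendering libraries consume the quaternion directly, but the decomposition is useful for analytics or gaze-based UI logic.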


6 Degrees of Freedom (6DoF) in the context of VR means that the VR system is able to track both the user’s head movement and the user’s position in space. Typically this means that when the user moves in the physical world, they also move in the virtual world. VR systems track user movement with one of two techniques: world-facing cameras installed on the Head Mounted Display (also called inside-out tracking), or optical tracking with external base stations, or lighthouses (also called outside-in tracking).

6DoF axes (credit: Wikipedia)

Types of VR systems

Phone based

In these systems, a mobile phone is used as the rendering device. When entering an immersive mode, you put your phone into some type of viewer. Varieties range from very affordable cardboard-based viewers to more advanced ones such as the Samsung* GearVR or Daydream* Viewer. Some viewers come with Bluetooth® controllers that give users more ways to interact with the content.

All In One Head Mounted Display

These are new devices launched in 2018, comprising a Head Mounted Display (HMD) that includes everything you need to render a VR experience: CPU, GPU, battery, screen, and sensors. Examples of these products are the Lenovo* Mirage, Oculus* Go and Quest, and Vive* Focus. They all come with controllers that let you interact with the content. The hardware found in these devices is similar to what you would find in a high-end smartphone. The benefits of these devices are threefold: comfort (you don’t have to slide your phone into a viewer), a better experience (the internal design is optimized for the use case), and convenience (just strap it onto your head).

PC tethered Head Mounted Displays

These systems are typically Head Mounted Displays connected to a computer. The connection can be physical, with a cable, or wireless (for example, the HTC* Wireless Adapter). The rendering is done by the computer and pushed to the Head Mounted Display. Because the computer can have high-end hardware, these products can render very rich experiences at great fidelity. These systems allow you to render 6DoF and 3DoF experiences with multiple controllers.


Types of VR inputs

There are various input systems for VR out there, and new ones are coming in the future. I’ll go through some popular ones here.

3DoF controllers

You find these in systems like Google* Daydream, Samsung* GearVR, Oculus* Go, and HTC* Vive Focus. The accelerometer and gyroscope inside the controller give you the orientation of the device, but not its position in space. These controllers usually include a few buttons and a touchpad. Such systems typically have only one controller.

Google* Daydream 3DoF controller
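Since a 3DoF controller reports orientation only, applications usually derive a pointing ray by rotating the default forward vector by that orientation. A minimal sketch with plain objects rather than WebXR types (the helper name is illustrative):

```javascript
// A 3DoF controller reports orientation only. To find where the user is
// pointing, rotate the default forward vector (0, 0, -1) by the
// controller's orientation quaternion {x, y, z, w}.
// Uses v' = v + 2 * cross(u, cross(u, v) + w * v), where u = (q.x, q.y, q.z).
function pointingDirection(q) {
  const v = { x: 0, y: 0, z: -1 };
  const cross = (a, b) => ({
    x: a.y * b.z - a.z * b.y,
    y: a.z * b.x - a.x * b.z,
    z: a.x * b.y - a.y * b.x,
  });
  const u = { x: q.x, y: q.y, z: q.z };
  // t = cross(u, v) + w * v
  const t = cross(u, v);
  t.x += q.w * v.x;
  t.y += q.w * v.y;
  t.z += q.w * v.z;
  const c = cross(u, t);
  return { x: v.x + 2 * c.x, y: v.y + 2 * c.y, z: v.z + 2 * c.z };
}

// Identity orientation points straight ahead (down -Z); a 90-degree yaw
// to the left points down -X.
const ahead = pointingDirection({ x: 0, y: 0, z: 0, w: 1 });
const left = pointingDirection({ x: 0, y: Math.SQRT1_2, z: 0, w: Math.SQRT1_2 });
```

The resulting direction, anchored at an arm-model-estimated controller position, is what drives laser pointers and selection rays in 3DoF systems.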

6DoF controllers

Similar to 3DoF controllers, these have built-in sensors that provide orientation; however, they also include technology that lets the VR system locate them in space. Most commonly, in an inside-out system they have visual markers that the camera inside the HMD can locate. In an outside-in tracking system, they usually use the same optical technology that is used to locate the HMD itself.

Microsoft* Windows Mixed Reality 6DoF controllers

HTC* Vive 6DoF controllers

Types of VR experiences

When using the WebXR Device API, you have to make sure that the experience will work for many people, so it’s important to think upfront about what kind of experience you want to build.

This section describes examples of experiences that you could build for various uses, such as social, entertainment, travel, and enterprise.

Stationary viewpoint

Typically, this is the kind of experience to create for 360 videos or pictures. The fixed position is usually where the camera was located when the video was recorded or the picture was taken. Users are able to look around, but can’t move. Depending on what you want to show, you can overlay content on top of your 360 picture, such as captions, information boxes, or menus. Choose this experience if the user is going to stay seated or will stand without changing position.

Bounded VR experiences

In these experiences, users can move in their physical space; however, their movements are limited by the XR hardware, typically to the play area that you set up in your VR system. This means that the XR system must show visual feedback in the headset to warn the user when a physical barrier is close (this mechanism is known as a chaperone). A bounded experience doesn’t mean that the virtual world is only as big as the play area of the XR system. Instead, it means that you must build your experience so that the user can interact with objects that are within the bounds, while also giving the user a way to move further, such as teleportation, where a user points where they want to go and presses a button to get there.
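The teleportation mechanic boils down to a small piece of geometry: intersect the controller's pointing ray with the floor. A minimal sketch, assuming a Y-up world with the floor at y = 0 (the function name and plain {x, y, z} vectors are illustrative, not WebXR types):

```javascript
// Compute a teleport destination by intersecting the controller's
// pointing ray with the floor plane (y = 0). Returns null when the ray
// points at or above the horizon and never reaches the floor.
function teleportTarget(origin, direction) {
  if (direction.y >= 0) return null;   // pointing at or above the horizon
  const t = -origin.y / direction.y;   // distance along the ray to y = 0
  return {
    x: origin.x + t * direction.x,
    y: 0,
    z: origin.z + t * direction.z,
  };
}

// A controller held 1 m up, pointing 45 degrees downward and forward:
const target = teleportTarget({ x: 0, y: 1, z: 0 }, { x: 0, y: -1, z: -1 });
```

A real implementation would also clamp the maximum teleport distance and reject targets outside the navigable area, but the core intersection is this simple.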

Decide the experience you want to create

As a developer, you begin by designing the experience you will provide to the user, because it determines how you use many aspects of the WebXR Device API, such as inputs and reference spaces.

The first thing to ask is whether or not the user will move in your experience.

  • If moving is not required, such as in a 360 video, then use a stationary reference space type.
  • If moving is part of your experience, then use the bounded reference space type.
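Under the draft API at the time of writing, that decision maps onto the dictionary passed to XRSession.requestReferenceSpace(). The helper below is an illustrative sketch (its name and parameters are assumptions, and the dictionary shape may change as the specification evolves):

```javascript
// Pick reference space options based on the experience being built,
// using the type/subtype dictionary shape from the draft WebXR spec.
function referenceSpaceOptions({ userMoves, seated, mediaOnly }) {
  if (userMoves) {
    // The user physically walks around: a bounded reference space.
    return { type: 'bounded' };
  }
  if (mediaOnly) {
    // 360 video/photo: the capture point is fixed, so disable position.
    return { type: 'stationary', subtype: 'position-disabled' };
  }
  // Stationary experience: pick the subtype matching the user's posture.
  return { type: 'stationary', subtype: seated ? 'eye-level' : 'floor-level' };
}

// Examples: a room-scale game versus a 360 video player.
const roomScale = referenceSpaceOptions({ userMoves: true });
const videoPlayer = referenceSpaceOptions({ userMoves: false, mediaOnly: true });

// In the page (browser-only, not runnable here) it would be used like:
// const refSpace = await session.requestReferenceSpace(roomScale);
```

Centralizing this choice in one helper makes it easy to fall back to a less capable reference space when the hardware doesn't support the preferred one.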

Stationary reference space

Even in a stationary scenario, you have a few variations, and you want to make sure they are all comfortable for the user. A typical discomfort is when the user is seated but appears to be lying on the floor or standing up in the virtual experience, which creates a disconnect between the user’s perception of the real world and what they are seeing inside the HMD. The WebXR API lets you specify three different subtypes for the stationary reference space:

  • position-disabled: used for 360 videos and pictures, because the capture point is fixed and you can’t move inside the virtual world. When you query pose data, the position values are always set to 0, while rotation values reflect the HMD pose.
  • floor-level: used for experiences where the user is standing and looking around. Again, movements are not taken into account here. The pose data is calculated in different ways, depending on the VR system. 3DoF systems often cannot calculate the floor level, so they give you an estimation (or some kind of emulation). 6DoF systems typically give you the position inside the configured play area at the time the user entered the experience, using the floor level set when the user configured the play area.

Calibration of the floor level on HTC* Vive systems (credit: HTC)

  • eye-level: used for a seated experience. The pose data is constructed with a position close to the user’s eyes or head. As with the floor-level subtype, each platform may calculate the initial position differently.

In both eye-level and floor-level cases, developers should not modify the position values inside the pose data, because they contain values necessary for user comfort. (For example, neck modeling is a technique that improves the pose data by taking into account that the HMD sits on the face while the neck is the pivot point when looking around.) Even if the pose data contains position values in a stationary reference space, the experience you’re building should not take them into account.

Bounded reference space

In this type of experience, the user moves in the physical space to interact with the content. These experiences require a VR system that can track the position of the user in the real world.

During setup of the VR system, the user sets up the play area and thus defines the area’s boundaries.

Calibration of play area with HTC* Vive (credit: HTC)

As a developer, you don’t need to fit your content within the play area, because VR systems have a mechanism that tells users when they are reaching the boundaries of the space (also called a chaperone, virtual boundaries, or guardian). If you want your user to travel further, you can use teleportation.

Chaperone of HTC* Vive (credit: UploadVR)

When requesting a reference space with the WebXR API, the application also receives information about the play area so that you can make sure the content is reachable.
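Because the play-area bounds are reported as corner points on the floor plane, the application can test whether a piece of content is reachable with a standard point-in-polygon check. A sketch assuming the bounds arrive as an array of {x, z} corners (the function name and data shape are illustrative):

```javascript
// Check whether a floor-plane point lies inside the play-area polygon,
// given as an array of {x, z} corners. Standard ray-casting
// point-in-polygon test: count edge crossings to the point's left.
function isWithinBounds(point, bounds) {
  let inside = false;
  for (let i = 0, j = bounds.length - 1; i < bounds.length; j = i++) {
    const a = bounds[i], b = bounds[j];
    const crosses = (a.z > point.z) !== (b.z > point.z);
    if (crosses &&
        point.x < ((b.x - a.x) * (point.z - a.z)) / (b.z - a.z) + a.x) {
      inside = !inside;
    }
  }
  return inside;
}

// A 2 m x 2 m play area centered on the origin:
const playArea = [
  { x: -1, z: -1 }, { x: 1, z: -1 }, { x: 1, z: 1 }, { x: -1, z: 1 },
];
```

You could run this check when placing interactive objects, nudging any that fall outside the bounds back toward the center of the play area.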

Putting it all together

When you create native VR experiences, you typically have to write an application for each of the platforms using their respective SDKs, creating specific versions optimized for each platform (for example, Windows* Mixed Reality, Oculus*, Steam* OpenVR, and Daydream*). However, when writing an application with WebXR, things get a bit different because your application can run on any system. Therefore, think about what kind of experience you want to create, then select the minimal reference space that your experience needs.

This article has described several important VR concepts for designing and implementing immersive web experiences and explained how the concepts are used in WebXR. The next article in the series explains how to use WebXR and Three.JS together.
