RAM#1409

A Virtual Reality Story

Story

“RAM#1409” is an interactive roomscale VR experience set in the near future. It explores
what would happen if the world were hit by a devastating pandemic. Mia, a young,
ambitious researcher, feels the pressure of finding a cure. She has a natural immunity, but
Lucas, her love, has tested positive. Under this amount of pressure, what happens to our
sense of ethics and morality?

Note: This work was conceived two years before the COVID-19 pandemic hit. Even though
the production closely resembles what later happened in reality, it was conceived as a
speculative work of fiction. In this case, reality imitated art; the work was not based on the
pandemic.

Exploring human memory

Human memory is fluid and flawed by nature. When we are confronted with our personal
flaws, shortcomings, or actions we are not proud of, we sometimes make certain memories
more positive and rose-coloured. We unconsciously try to reconcile the cognitive dissonance
caused by, for example, trauma, grief, or guilt.

In RAM#1409, the flawed nature of memory is explored; the VR world and narrative
present a character who is almost obsessively working on finding a cure for a global
pandemic. Commendable, one would say. But we experience this narrative as extracted
from the memory of this main character.

Through intuitive interaction, the user physically “dusts off” the layers of subjective, skewed
recollection, and gradually uncovers the “raw” memory data, which is a lot less rose-coloured
than how our main character remembers it.

Increasing VR-accessibility

Exploring new interfaces/interaction models

This project had multiple goals. One was to explore how VR experiences can become more intuitive and accessible to non-technical users. A big part of this was removing the obstacle of navigating with VR controllers, since these are often difficult to understand quickly. Additionally, VR interactions are often controlled by pushing buttons or pulling triggers on controllers. Again, for non-technical users, these interactions are quite hard to learn. As a result, technology gets in the way of the experience and the storytelling.

Redirected walking

Exploring VR-locomotion and roomscale interaction in small living rooms

An ongoing challenge in immersive experiences is the way the user navigates the environments presented in the virtual world. The way in which the user can move through VR-spaces is referred to as the “locomotion system”. Currently, the most common methods are either teleportation, in which the user points at a target destination and then “appears” there, or thumbstick-locomotion, also known as “smooth” locomotion. In the latter system, the user pushes forward on the thumbstick of the controller to move virtually. This system can easily lead to nausea, especially for novice VR-users.
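As a minimal illustration of the two systems, the sketch below shows what they reduce to in engine-agnostic Python-style pseudocode. All function names and parameters are hypothetical and not taken from the actual project.

```python
import math

def smooth_locomotion(rig_pos, rig_yaw, thumbstick, speed, dt):
    """Thumbstick ("smooth") locomotion: push the stick to glide through the
    virtual world. Seeing motion while standing still physically is what
    tends to cause nausea in novice users."""
    x, y = thumbstick                                  # stick deflection, each in [-1, 1]
    forward = (math.sin(rig_yaw), math.cos(rig_yaw))   # facing direction on the floor plane
    right = (math.cos(rig_yaw), -math.sin(rig_yaw))
    dx = (forward[0] * y + right[0] * x) * speed * dt
    dz = (forward[1] * y + right[1] * x) * speed * dt
    return (rig_pos[0] + dx, rig_pos[1] + dz)

def teleport(rig_pos, pointed_target):
    """Teleportation: the user points at a destination and simply "appears"
    there. No continuous motion, so little nausea, but it breaks the feeling
    of naturally walking through a space."""
    return pointed_target
```

Both approaches also require the user to operate the controller itself, which is exactly the friction described below.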

Both systems require the user to interact with various buttons, sticks or triggers on the VR-controller. For users who are not familiar with VR, this can be quite overwhelming, and the relative complexity creates friction.

One of the goals of RAM#1409 was to make the experience as accessible and frictionless as possible, without the user interface or interaction model distracting the user and compromising immersion.

Instead of implementing the traditional locomotion models described above, we explored a form of “redirected walking”. A natural, intuitive way to navigate a virtual environment is simply to walk around physically. However, in most cases the physical space in which the VR-experience is used is smaller than the virtual space presented, which is what normally makes teleportation or stick-based locomotion necessary.

VR-playspaces are generally at least 2×2 meters. This is also the minimum physical space Oculus requires users to have for “roomscale” experiences. However, a 2×2-meter space is still quite small, and doesn’t give the user much sense of actually walking through a space.

RAM#1409 was designed to optimize usage of this small space. The virtual world presented is much larger than 2×2 meters, giving a sense of being in a large space. Within this visual design, objects such as furniture are placed in such a way that they quite naturally guide the user to walk in directions that fit within the physical space. The placement of virtual actors and interactive objects further helps guide the user, as sketched below.
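The kind of layout check this implies can be illustrated with a short sketch: given the physical playspace and the positions of the objects the user is guided past, verify that the resulting walking path stays inside the boundary. All names and numbers below are illustrative, not taken from the actual production.

```python
PLAYSPACE_HALF = 1.0  # half of a 2×2 m playspace, in meters

def inside_playspace(p, margin=0.15):
    """True if point p = (x, z) lies inside the playspace, with a safety margin."""
    limit = PLAYSPACE_HALF - margin
    return abs(p[0]) <= limit and abs(p[1]) <= limit

def path_fits(waypoints, steps_per_segment=20):
    """Sample straight-line segments between consecutive waypoints (furniture,
    virtual actors, interactive props) and check every sample point."""
    for (x0, z0), (x1, z1) in zip(waypoints, waypoints[1:]):
        for i in range(steps_per_segment + 1):
            t = i / steps_per_segment
            p = (x0 + (x1 - x0) * t, z0 + (z1 - z0) * t)
            if not inside_playspace(p):
                return False
    return True

# Example: a guided route past a desk, a shelf and a door trigger.
route = [(-0.6, -0.6), (0.5, -0.4), (0.4, 0.6), (-0.5, 0.5)]
print(path_fits(route))  # True: this loop stays within the 2×2 m space
```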

Through these elements, as well as during scene transitions, the user is subtly and unconsciously walking and physically turning before reaching the physical boundaries of the playspace. The result is that the user has the feeling of being able to walk around freely in a very large virtual experience, while in reality being physically “redirected” through a very small real-world space. This method has led to a very natural, intuitive way of engaging with the experience, without technology getting in the way.

Volumetric film

In the 2012–2018 phase of VR, 360-degree videos were quite prevalent among experiences aimed at non-gamers. While VR-videogames at the time had quite low-quality visuals, these 360-degree filmed experiences looked much more realistic. However, this level of visual richness also led to a natural expectation that the presented world could be interacted with; since it looked like “real life”, many first-time VR-users naturally expected they should be able to move around and touch objects, as they would in real life.

So, 360-degree “cinematic” VR offers visual realism, but lacks natural interaction. VR-gaming offers rich interaction, but does not offer the same level of realism as filmed presentations.

Maker Avinash Changa, from WeMakeVR, believes that in coming years, we’ll see more hybrid experiences that bridge the gap between these two forms. This new form will be neither film nor game, but a new form of experience altogether.

A significant challenge in this convergence of media is how the characters the player interacts with are presented. In a videogame, characters would be 3D models (with limited realism); in 360-video they would be filmed, but then they can’t be spatially integrated into an environment created in a game engine.

To address this challenge, we explored Volumetric Capture technologies, a domain that was (and still is) quite novel.

Volumetric Capturing could be used to bridge the gap between VR-gaming (where the user can move through a world, but is visually presented with game graphics) and Cinematic VR (where the user cannot move, but is visually presented with more photorealistic, filmed visuals).

Volumetric capturing is still in the early stages of development, and it was even more so at the time this project was started. Additionally, the computing power required to present volumetrics is significant, and not well suited to mobile VR headsets such as the Oculus Quest. At the same time, presenting the project on Quest was a hard internal requirement, since this makes distribution to a large audience much easier. We embraced the technical limitations: the lower resolution and rendering quality offered by both the capture and display technology available at the time. In fact, the nature of the story (imperfect recollections and distorted memories) aligned well with the currently imperfect nature of volumetric technologies.
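A rough illustration of why the rendering budget matters: volumetric clips are commonly delivered as a sequence of textured meshes that is swapped every frame of playback, so every frame costs geometry and texture bandwidth. The sketch below is hypothetical; the names, formats and budgets are illustrative and not the pipeline used in RAM#1409.

```python
from dataclasses import dataclass

@dataclass
class VolumetricFrame:
    mesh: object         # decoded geometry for this time step
    texture: object      # texture atlas projected onto the mesh
    triangle_count: int

class VolumetricClip:
    # Mobile headsets such as the Quest impose tight budgets, which is why
    # capture resolution is typically reduced before shipping (illustrative number).
    MAX_TRIANGLES = 20_000

    def __init__(self, frames, fps=30):
        self.frames = frames
        self.fps = fps

    def frame_at(self, time_seconds):
        """Pick the mesh/texture pair to display at a given playback time."""
        index = int(time_seconds * self.fps) % len(self.frames)
        frame = self.frames[index]
        if frame.triangle_count > self.MAX_TRIANGLES:
            raise ValueError("frame exceeds the mobile rendering budget")
        return frame
```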

Meet the cast