A Forest in the Desert

— In collaboration with John Faichney.
Design, Content curation, 3D modeling, Application architecture, Web development (A-frame, Three.js)

forest.anastasia.io

The problem

In an age of ubiquitous computing and portable electronics, capturing and sharing our experiences has never been easier. Many platforms exist to facilitate this (e.g. Instagram, Facebook, Flickr), yet categorizing this content remains surprisingly tedious. We have methods such as chronological sorting, grouping by feature recognition, geotagging, and user-driven categorization, but it is still challenging to curate and present content in a deeply engaging way.

Prototyping a solution - context = meaning

One of the primary issues in presenting engaging user-generated content is the lack of context. This is why categorization techniques such as geotagging are so successful, even with traditional 2D content. As more immersive content is generated, the lack of context becomes a more pressing concern: once the wow factor of a 360° VR experience wanes, users are often left searching for meaning. Contextually relating content, both spatially and chronologically, is a possible solution that could help us create richer virtual experiences, allowing us to more seamlessly integrate 2D and immersive user-generated content.

A Forest in the Desert is an open source prototype that aims to test this idea by contextually relating the content generated over the design and construction of the 2017 Burning Man Temple, and presenting it in VR. I chose this project because I was a design lead and had personally generated a lot of content over its duration. The approach could also be applied to other places for which we have 3D representations (e.g. VPS, Google Earth, museums), where users generate geotagged content.

Target users
Temple crew

People who were involved with the design/construction of the project

Burning Man participants

People who experienced the installation in real life

General public

Anybody who did not physically experience the Temple art installation

Implementation

Given that this prototype was intended for as broad an audience as possible, I wanted it to be implemented in a way that was both: 1. a compelling experience in a 2D browser as well as in VR; and 2. VR device and controller agnostic (3DoF, 6DoF). I had never used webVR before, and chose to build the prototype with A-frame, an open source webVR framework, since a browser-based experience could theoretically provide increased accessibility. I also wanted to contribute to open source VR, and hoped that part of my prototype code could be generalized and shared with the community. The build is a collaboration between myself and another developer, John Faichney, and is the first complex web application that I have fully architected.
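For readers unfamiliar with A-frame, it describes a VR scene declaratively in HTML. A minimal scene of the kind this prototype builds on might look like the sketch below (asset paths, IDs, and the library version are illustrative, not taken from the actual build):

```html
<!-- Minimal A-frame scene sketch; asset path and ids are illustrative -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-assets>
        <!-- hypothetical 3D model asset -->
        <a-asset-item id="temple-model" src="assets/temple.gltf"></a-asset-item>
      </a-assets>
      <a-entity gltf-model="#temple-model" position="0 0 -5"></a-entity>
      <a-sky color="#ECCDA2"></a-sky>
      <!-- one camera rig works across desktop, mobile, and headsets -->
      <a-entity camera look-controls wasd-controls>
        <a-cursor></a-cursor>
      </a-entity>
    </a-scene>
  </body>
</html>
```

Because the same markup runs in a flat browser window and in a headset, this declarative approach is what makes a device-agnostic build feasible.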

Constraints -- webVR as a medium

While A-frame is incredible, it is still at a fairly nascent stage, with limited functionality compared to fully fledged game engines like Unity. Browser support is also an issue: webVR is only supported in Firefox Nightly and Chromium on desktop, and in Chrome, Samsung Internet, and a few other mobile browsers. The last major constraint is performance, due to device capabilities and network connectivity limitations.

Constraints -- content curation

All the content for the prototype was manually generated/curated by me, introducing a heavy bias into what is being presented. I tried to work against the classic 'designing for myself' problem, but it was somewhat unavoidable in this case. Additionally, given the nature of the Temple installation and its theme around mortality, I chose to omit content that was too sensitive in nature.

Environment Design

The design of the environments aims to capture the ephemeral qualities of light and shadow, dust moving across the landscape, flickering light, soft voices — some of the factors that helped the installation be perceived as sacred space. The A-frame environment component was calibrated and used as a procedurally generated backdrop. All other environment assets were designed and 3D modeled in Rhino/Blender by me.
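The environment component is configured declaratively on an entity. A sketch of how a dusty desert backdrop might be set up is below; the parameter values shown are illustrative, not the calibrated values used in this build:

```html
<a-scene>
  <!-- aframe-environment-component: a preset plus overrides;
       the values here are illustrative, not the build's calibration -->
  <a-entity environment="preset: egypt; fog: 0.7; dressing: none;
                         ground: flat; groundColor: #b8a37e"></a-entity>
</a-scene>
```

Calibration then becomes a matter of tuning these key-value pairs until the procedural backdrop sits well with the hand-modeled assets.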

Interaction Design

The interaction design for the prototype is intentionally simple, borrowing heavily from existing VR interaction design patterns. Text is minimized for ease of use; tooltips will be added in future builds. Contextual content is accessed through content markers.

Menus

The menus serve both a navigational and informational purpose, allowing a user to navigate between scenes and toggle the active model or 360. A similar nested menu UI pattern is used for both the model and 360 menus.
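One way to express a nested menu pattern like this in A-frame is to group sub-menu items under a parent entity and toggle its visibility; the sketch below is a hypothetical structure, with labels and dimensions invented for illustration:

```html
<!-- Nested menu sketch: a top-level item reveals a sub-menu
     by toggling the group's visible attribute -->
<a-entity id="menu" position="0 1.6 -2">
  <a-plane class="menu-item" width="0.8" height="0.2" color="#333"
           text="value: Models; align: center"></a-plane>
  <a-entity id="model-submenu" visible="false">
    <a-plane class="menu-item" position="0 -0.3 0" width="0.8" height="0.2"
             color="#555" text="value: Temple; align: center"></a-plane>
  </a-entity>
</a-entity>
```

Because both the model and 360 menus share this structure, the same sub-menu logic can be reused across them.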

Markers

Markers are embedded in the model view and indicate locations where content is available. In the current build, only 360 content is supported. Future iterations will expand upon the marker component to display different types of content.
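A content marker can be written as a custom A-frame component that emits an event when selected, so a scene controller can load the linked 360 content. The sketch below is a hedged approximation; the component name, schema, and event names are assumptions, not the actual prototype code:

```html
<script>
  // Hypothetical marker component: on click, notifies the scene
  // which piece of content should be displayed.
  AFRAME.registerComponent('content-marker', {
    schema: {
      contentId: {type: 'string'},            // id of the linked 360
      contentType: {type: 'string', default: '360'}
    },
    init: function () {
      var el = this.el;
      var data = this.data;
      el.addEventListener('click', function () {
        el.sceneEl.emit('marker-selected', {
          contentId: data.contentId,
          contentType: data.contentType
        });
      });
    }
  });
</script>

<!-- A marker placed in the model view; position and id are illustrative -->
<a-sphere radius="0.2" color="#ffcc66" position="1 1.5 -3"
          content-marker="contentId: temple-entrance-360"></a-sphere>
```

Extending the schema's contentType is one natural path for the planned support of other content types.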

Controller input

User input is simple, intended to give the user selection control and the ability to navigate around a scene. While 3DoF or 6DoF controllers are ideal, the prototype also works with WASD/arrow controls on the keyboard. A reticle is present when not in VR view.
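In A-frame this kind of device-agnostic input can be composed from built-in components; the rig below is a sketch of the general pattern (the prototype's actual rig may differ):

```html
<!-- Camera rig sketch: WASD/arrow keys on desktop, a gaze-based
     reticle outside VR, and laser pointers when a tracked
     controller is present -->
<a-entity id="rig" wasd-controls>
  <a-entity camera look-controls>
    <a-cursor></a-cursor> <!-- reticle shown when not in VR view -->
  </a-entity>
  <a-entity laser-controls="hand: right"></a-entity>
</a-entity>
```

Both the cursor and the laser controls fire the same click events, so selection logic written once works across 3DoF controllers, 6DoF controllers, and plain browsers.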

Next steps

Onboarding, User testing, Adding support for unbuilt UI components, Adding support for 2d content, Voice overs, Multi-user support ... open to feedback and suggestions :)