Apr-Jun 2017 //
Design / Unity Development / Audio
Scapes is an experimental world for experiencing music in VR, designed and built for Google Cardboard.
As a former environmental designer, I’m always thinking about how the spaces we inhabit can better support us, both functionally and experientially. VR is an incredible medium for this kind of exploration and a fairly new one for me. Scapes is a project that explores some of these concepts through Google Cardboard. You can check out a brief demo of the experience on YouTube below.
It sounds somewhat trite, but this project started with an ambient electronic song that I wrote. I'm fascinated by spaces dedicated to listening to music (Japanese listening cafés, for example) and thought it would be interesting to explore creating some of these as virtual environments.
Choosing a platform is a huge question when designing and developing in VR! In the interest of making (at least parts of) the project as accessible as possible, I went with Cardboard for its ease of development and low barrier to entry.
How I make things in VR…
Everyone I’ve spoken to does it slightly differently, based on their area of expertise. My process is definitely informed by my architectural education and practice. I initially think about the core functionality I want to provide and how that could manifest in form. I think about the qualities of space, how a user interacts with it, and the functionality that needs to be provided. I also look for precedents, existing work that supports or could influence what I’m trying to do, like the work of James Turrell below.
Design & Production Tools
Similar to architectural design, I typically start with some quick sketches and then move into Rhinoceros 3D or Blender to model in 3D. One of the many things I love about Unity is how easy it can be to rapidly prototype, switching between Sketch or 3D modeling software for asset generation and the Unity editor/C# for building functionality. For a highly conceptual, exploratory project like this, I start each day with a list of tasks that I systematically work through (like Asana with no oversight).
Environments & experiences
One of the primary goals of the project is to create spaces for experiencing music. Visual as well as audio feedback is an important part of allowing a user to feel immersed in the audio. I’m testing different approaches to better understand the experience of visualized audio in VR and have described a few below.
There are endless ways to visualize audio: signal-like waves that highlight frequency or amplitude, or more abstract patterns of form and light. The home environment is simple and allows a user time to acclimate to the overall experience. It’s really a space for listening and relies heavily on real-world tropes like the scenic view and the seated person, indicating pause. An audio visualizer is used to give presence to the audio, versus a purely sound-based gesture like increased volume.
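As a rough illustration of the frequency-highlighting approach, here is a minimal Unity C# sketch of the kind of visualizer this describes: it samples the spectrum of the playing track and drives the height of a row of bars. The class name, the `bars` array, and the `sensitivity` value are my own illustrative assumptions, not taken from the Scapes project.

```csharp
using UnityEngine;

// Hypothetical sketch: scale a row of objects with the amplitude of
// low-to-high frequency bands from the currently playing AudioSource.
public class AudioVisualizer : MonoBehaviour
{
    public AudioSource source;      // the music track
    public Transform[] bars;        // one object per frequency band
    public float sensitivity = 50f; // illustrative scaling factor

    private float[] spectrum = new float[256]; // length must be a power of two

    void Update()
    {
        // Sample the FFT of the current audio output.
        source.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);

        int samplesPerBar = spectrum.Length / bars.Length;
        for (int i = 0; i < bars.Length; i++)
        {
            // Average this band's samples and drive the bar's height.
            float sum = 0f;
            for (int j = 0; j < samplesPerBar; j++)
                sum += spectrum[i * samplesPerBar + j];

            float height = 1f + (sum / samplesPerBar) * sensitivity;
            Vector3 s = bars[i].localScale;
            bars[i].localScale = new Vector3(s.x, height, s.z);
        }
    }
}
```

In practice the same spectrum data could just as easily drive light intensity or color rather than scale, which suits the more abstract sphere scene.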
In contrast, in the sphere scene, I tried to create an audio reactive immersive environment that was completely abstract. I’m still exploring how to add a menu or navigation into something so abstract without it seeming jarring or detracting from the experience. Color and repetition are used here in a Kandinsky-esque fashion to create rhythm.
Interaction & interfaces
There is so much to think about when designing interfaces for VR applications. Should the interface be diegetic or spatial? Object-based or screen-based? I thought it might be nice to experiment with a few different options. The icon-based spatial menu shown below is intended to provide a basic user interface through a nested menu structure. At this preliminary stage, not much interface support is required because the application simply doesn’t have much functionality. All of this needs to be tested and may change as a result.
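Since Cardboard has no controller beyond a single button, a spatial menu like this is typically driven by gaze. Below is a minimal Unity C# sketch of gaze-dwell selection, assuming hypothetical names (`GazeSelector`, `GazeMenuItem`, `dwellTime`) that are my own, not from the project.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Hypothetical sketch of Cardboard-style gaze selection: raycast from the
// head camera each frame and trigger a menu item after the gaze dwells on it.
public class GazeSelector : MonoBehaviour
{
    public Camera viewCamera;       // the Cardboard head camera
    public float dwellTime = 1.5f;  // seconds of sustained gaze to select

    private GazeMenuItem current;
    private float timer;

    void Update()
    {
        Ray ray = new Ray(viewCamera.transform.position,
                          viewCamera.transform.forward);

        GazeMenuItem hitItem = null;
        if (Physics.Raycast(ray, out RaycastHit hit, 100f))
            hitItem = hit.collider.GetComponent<GazeMenuItem>();

        if (hitItem != current)
        {
            // Gaze moved to a new target (or to nothing); restart the dwell.
            current = hitItem;
            timer = 0f;
        }
        else if (current != null)
        {
            timer += Time.deltaTime;
            if (timer >= dwellTime)
            {
                current.onSelect.Invoke();
                timer = 0f;
            }
        }
    }
}

// Attach to each menu icon; wire onSelect in the inspector,
// e.g. to open a nested submenu or load a scene.
public class GazeMenuItem : MonoBehaviour
{
    public UnityEvent onSelect;
}
```

A dwell timer avoids relying on the button at all, though it trades speed for the risk of accidental selections while simply looking around.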
Narration can provide information and direction within the environment; it’s used less often than other interfaces but is highly effective in game design. Where should the voice come from? What should it sound like? Should it be disembodied or not? I started scripting some of the voice recordings and have shared part of the recording for the intro scene here.
So many things. While I'm not currently working on this project, I'm planning on integrating binaural audio and possibly building a Vive version for full 6DoF interaction. I’ve been thinking a lot about the music creation aspect of the project too and have some interesting ideas to test out. Stay tuned!