Scapes

Very much in progress. Design (interface, interaction, environments, etc.) & development (Unity, C#) for an experimental music-driven world... my own mini universe circa June 2017.

github.com/avmakesthings/Scapes-GVR

Creating Scapes

As a former environmental designer, I'm always thinking about how the spaces we inhabit can better support us, both functionally and experientially. VR is an incredible medium for this kind of exploration and a fairly new one for me. I figured it might be fun to share parts of this project and my thought process behind the design and development as it evolves. You can check out a brief demo of the experience on YouTube below.

Concept

It sounds somewhat trite, but this project started with a song; well, an ambient electronic track that I wrote while commuting to Google I/O this year. I'm fascinated by spaces for listening to music (e.g. Japanese listening cafes) and thought it would be interesting to explore creating some of these as virtual environments.

What device?

A huge question when designing and developing in VR right now! In the interest of making at least parts of the project as accessible as possible, I figured I'd go with Cardboard for a basic version and Vive for a more complex build, starting with the former because I hadn't built a Cardboard app before.

How I make things in VR…

Everyone I’ve spoken to does it slightly differently, based on their area of expertise. My process is definitely informed by my architectural education and practice. I initially think about the core functionality I want to provide and how that could manifest in form. I then think about the qualities of the space and how a user interacts with it... the affordances, the perception... the phenomenological characteristics. I look for precedents: existing things that support or could influence what I'm trying to do, like the work of James Turrell below.

Design & Production Tools

Similar to architectural design, I typically start with some quick sketches and then move into Rhinoceros 3D or Blender for 3D modeling. One of the many things I love about Unity is how easy it can be to rapidly prototype, switching between Sketch or 3D modeling software for asset generation and the Unity editor/C# for building functionality. For a highly conceptual, exploratory project like this, I start each day with a list of tasks that I systematically work through (like Asana with no oversight). This is pretty helpful with Unity because it's a crazy wormhole: if you're not careful you can blink and it's 4am and you have half a vertex shader that doesn't work instead of a functioning menu.

Environments & experiences

One of the primary goals of the project is to create spaces for experiencing music. Visual as well as audio feedback is an important part of allowing a user to feel immersed in the audio. I’m testing different approaches to better understand the experience of visualized audio in VR and have described a few below.

There are endless ways to visualize audio: signal-like waves with the frequency or amplitude highlighted, or, more abstractly, patterns of form and light. The home environment is simple and gives a user time to acclimate to the overall experience. It's really a space for listening and relies heavily on real-world tropes like the scenic view and the seated figure, indicating pause. An audio visualizer is used to give presence to the audio, rather than a purely sound-based gesture like increased volume.
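For the curious, this kind of visualizer is pretty approachable in Unity. Below is a minimal sketch (placeholder code, not the project's actual implementation): Unity's AudioSource.GetSpectrumData fills an array with FFT magnitudes each frame, and here a slice of bins is averaged per bar and used to scale some placeholder objects. The bar objects and the gain value are assumptions for illustration.

```csharp
using UnityEngine;

// Minimal audio visualizer sketch: scales a row of bar objects
// by averaged FFT magnitudes from the playing track.
public class SimpleAudioVisualizer : MonoBehaviour
{
    public AudioSource source;   // the track being played
    public Transform[] bars;     // placeholder objects to scale
    public float gain = 40f;     // visual gain (tuning value)

    readonly float[] spectrum = new float[256]; // must be a power of two

    void Update()
    {
        source.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);

        // Average a slice of bins per bar so lows and highs both register.
        int binsPerBar = spectrum.Length / bars.Length;
        for (int i = 0; i < bars.Length; i++)
        {
            float sum = 0f;
            for (int j = 0; j < binsPerBar; j++)
                sum += spectrum[i * binsPerBar + j];
            float level = (sum / binsPerBar) * gain;

            // Smooth toward the new level to avoid jitter.
            Vector3 s = bars[i].localScale;
            s.y = Mathf.Lerp(s.y, 1f + level, 10f * Time.deltaTime);
            bars[i].localScale = s;
        }
    }
}
```

Smoothing with Mathf.Lerp matters more than you'd think: raw spectrum values flicker frame to frame, and in VR that flicker reads as noise rather than rhythm.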

In contrast, in the sphere scene, I tried to create an audio reactive immersive environment that was completely abstract. I’m still exploring how to add a menu or navigation into something so abstract without it seeming jarring or detracting from the experience. Color and repetition are used here in a Kandinsky-esque fashion to create rhythm.

Interaction & interfaces

There is so much to think about when designing interfaces for VR applications. Should it be diegetic or spatial? Object- or screen-based? I thought it might be nice to experiment with a few different options. The icon-based spatial menu shown below is intended to provide a basic user interface through a nested menu structure. At this preliminary stage, not much interface support is required because the application simply doesn't have much functionality. All of this needs to be tested and consequently may or may not stay this way.
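Since Cardboard has no controller, a spatial menu like this is usually driven by gaze: raycast from the camera each frame and select an icon once it's been looked at long enough. A rough sketch of that pattern (hypothetical code, with an assumed OnGazeSelect message on the icon objects):

```csharp
using UnityEngine;

// Gaze-based selection for Cardboard-style input: raycast from the
// camera and fire a selection message after a dwell period.
public class GazeSelector : MonoBehaviour
{
    public Camera viewCamera;      // the VR camera
    public float dwellTime = 1.5f; // seconds of gaze before selecting

    float gazeTimer;
    GameObject lastTarget;

    void Update()
    {
        Ray ray = new Ray(viewCamera.transform.position,
                          viewCamera.transform.forward);

        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, 20f))
        {
            if (hit.collider.gameObject != lastTarget)
            {
                lastTarget = hit.collider.gameObject;
                gazeTimer = 0f; // new target: restart the dwell timer
            }

            gazeTimer += Time.deltaTime;
            if (gazeTimer >= dwellTime)
            {
                // Assumed convention: menu icons respond to this message.
                lastTarget.SendMessage("OnGazeSelect",
                    SendMessageOptions.DontRequireReceiver);
                gazeTimer = 0f;
            }
        }
        else
        {
            lastTarget = null;
            gazeTimer = 0f;
        }
    }
}
```

Dwell time is one of those values that has to be tuned by feel: too short and users trigger menus accidentally just by looking around, too long and the interface feels unresponsive.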

Narration can be used to provide information and direction within the environment; it's perhaps less common than other interfaces but highly effective in game design. Where should the voice come from? What should it sound like? Should it be disembodied or not? I started writing and recording some of the narration and have shared part of the recording for the intro scene here.

//TO-DO

So many things. I'm currently working on a point-cloud audio-reactive visualization and will post progress over the next few weeks. I'm also going to start integrating binaural audio and will switch over to Vive for more nuanced interaction. I've been thinking a lot about the music-creation aspect of the project too and have some interesting ideas to test out. Stay tuned!
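As a teaser, here's a rough sketch of one possible starting point for the point-cloud piece (placeholder code, not the actual work in progress): Unity can render a mesh as raw points via MeshTopology.Points, and the vertices can then be displaced by an audio level each frame.

```csharp
using UnityEngine;

// Point-cloud sketch: a sphere of random points rendered as a
// Points-topology mesh, pushed outward by an audio level.
[RequireComponent(typeof(MeshFilter))]
public class PointCloud : MonoBehaviour
{
    public int pointCount = 5000;

    Mesh mesh;
    Vector3[] basePositions;

    void Start()
    {
        basePositions = new Vector3[pointCount];
        var indices = new int[pointCount];
        for (int i = 0; i < pointCount; i++)
        {
            basePositions[i] = Random.insideUnitSphere * 5f;
            indices[i] = i;
        }

        mesh = new Mesh();
        mesh.vertices = basePositions;
        mesh.SetIndices(indices, MeshTopology.Points, 0);
        GetComponent<MeshFilter>().mesh = mesh;
    }

    // level would come from audio analysis, e.g. an averaged
    // AudioSource.GetSpectrumData or GetOutputData value.
    public void React(float level)
    {
        var verts = new Vector3[pointCount];
        for (int i = 0; i < pointCount; i++)
            verts[i] = basePositions[i] * (1f + level);
        mesh.vertices = verts;
    }
}
```

A material that renders point primitives (or a geometry shader expanding them into quads) would still be needed to make the points visible at a useful size.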