Afterthoughts: previs in VR for VR. A limbo state of challenge and experimentation.

One can argue that our project was an experiment in using VR as a previs tool for a VR project. VR has been used as a previs tool for actual films and series, such as The Lion King and The Mandalorian, but it has not been used recursively, i.e. previs in VR for VR.

Doing previs in VR was a challenge because we had to identify the key points of distinction between a film and a VR experience [in our case a VR trailer]. For example, a big aspect of previs in films is camera placement and movement. In a VR experience, however, the camera is essentially the player wearing the headset. No matter how limited the interaction or how linear the storyline, the movement of the headset is always random and unpredictable. Therefore, instead of focusing on where the camera would be placed and what path it would follow, I tried designing affordances in the scene that would allow for different perspectives and thereby let me tell the story from both a first-person and a third-person point of view. I decided that this parameter was of utmost importance for previsualizing scene 4, which is why it was the first practice I experimented with and tested in my previs process. It should be noted that prior to testing the perspectives, I had already set up a basic scene with the sizes and proportions in place; sizes and proportions of assets are also very important in VR, which is why I adjusted them before starting the previs process. Should I have devoted a part of my previs to sizes and proportions instead of establishing them beforehand?

I then proceeded to test practices that are equally important in both films and VR experiences, such as lighting, style/color, and the motion/movement of assets in the scene. I am not going to delve into detail on how I implemented these, as I have another post dedicated to a full analysis of each step of the process.

Spatial sound is a parameter that matters mostly in virtual reality, and I left it for the end for a couple of reasons. First, the sound was not available to me until much later, and I did not want to leave the whole project to the last minute; second, I believed that spatial sound is a flexible parameter that can be tested at any point. Was I wrong?

Motion sickness is an aspect I tested at each step, which is why I did not include it in the order above.

Because previs in VR for a VR project is unprecedented, there is no established order in which to test each practice. That is why, when I finished the whole process and looked back, I questioned whether my end result would have taken a completely different path had I chosen a different order of testing.

What if I had not established sizes and proportions from the very beginning? What if the sizes were completely random and I had tested different perspectives with random proportions? Would that have led me to an epiphany that something works better in a different way? What if I had no order at all and tested everything at the same time, iterating that test over and over by changing a few settings each time? For example, change sizes and proportions, lighting, and colors, in both first-person and third-person point of view, and then do another iteration with different colors, different lighting settings, and different sizes. Would that have led to something completely different? Additionally, assuming I had the sound from the very beginning, would the order of the incoming assets and their respective motion be different from the end result that I have now? In other words, before adding sound my scene was about three minutes long. When I gained access to the sound and realised that the whole soundtrack of the trailer was three minutes long and I had to pick just a few seconds of it, I had to readjust the speed of the incoming assets. What if I had known from the beginning what my sound would be?
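The soundtrack problem above is essentially a rescaling question: if only a short excerpt of the audio can be used, every asset's arrival time has to compress to fit it. As a minimal sketch of that idea (with made-up numbers and a hypothetical helper, not the actual project code), it could look like this:

```python
# Hypothetical sketch: linearly rescaling asset arrival times so a
# scene planned for one duration fits a shorter usable audio clip.
# All numbers below are illustrative, not the project's real timings.

def rescale_timings(arrival_times, original_length, target_length):
    """Scale each arrival time by target_length / original_length."""
    factor = target_length / original_length
    return [round(t * factor, 2) for t in arrival_times]

# A scene originally ~180 s (three minutes), with assets arriving at:
arrivals = [10.0, 45.0, 90.0, 150.0]

# If only 30 s of soundtrack can be used, everything compresses 6x.
print(rescale_timings(arrivals, original_length=180.0, target_length=30.0))
# → [1.67, 7.5, 15.0, 25.0]
```

In practice the compression would probably not be purely linear (some beats need to breathe more than others), but even this simple version shows how much the audio's length dictates the scene's pacing.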

I guess the questions above cannot be answered at this time. More iterations may resolve some of my queries, but I also believe that even though previs is a bounded framework, it varies from project to project, and the way one decides to order and follow the framework can significantly change a project's outcome.

Ultimately, I do believe that previs is a helpful process for VR projects. As mentioned above, even though it is a structured framework, I think it offers different pathways that may lead to completely distinct end results. I also believe that previs in VR would be even more helpful for large projects, rather than just a three-minute trailer as in our case. In my opinion, previs in VR for VR must evolve further, and designers/VR creators must start adding more practices to the VR previs framework.
