The player finds themselves in the hangar, where they have to interact with the neon orange capsule in order to trigger the introductory video. The video explains who the player is, their role, and their mission in this experience. When the video is over, the player is prompted to leave VR mode and continue in AR mode.
UI
The first thing the player will encounter upon opening the application is an instructions panel. The player must press the “Ok” button in order to proceed to the experience.
Plane Detection
The application detects a horizontal plane via the smartphone’s camera and automatically instantiates trash objects. Initially the player had to tap the screen in order to instantiate the trash; however, I decided to change it to automatic for a couple of reasons. The first is that I believed this step was unnecessary: it was an extra task the player had to remember to do. The second is that the “Ok” button in the UI interfered with the plane detection, and while testing the app I found myself instantiating trash while pressing “Ok”, which led to misplaced trash.
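For reference, here is a rough sketch of how this automatic spawning could look with Unity’s AR Foundation, assuming an ARPlaneManager in the scene; the field names and structure are placeholders rather than my exact project code.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Minimal sketch of automatic spawning on newly detected planes.
// Field names (planeManager, trashPrefab) are placeholders, not the project's actual code.
public class TrashSpawner : MonoBehaviour
{
    [SerializeField] ARPlaneManager planeManager; // on the AR Session Origin
    [SerializeField] GameObject trashPrefab;      // hypothetical trash prefab

    void OnEnable()  { planeManager.planesChanged += OnPlanesChanged; }
    void OnDisable() { planeManager.planesChanged -= OnPlanesChanged; }

    void OnPlanesChanged(ARPlanesChangedEventArgs args)
    {
        // Spawn trash on every plane detected this frame - no tap on the screen required.
        foreach (var plane in args.added)
            Instantiate(trashPrefab, plane.center, Quaternion.identity, plane.transform);
    }
}
```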
Image Tracking
The player uses the target image to generate a virtual net that will be used to pick up the virtual trash. The player must move the net so that it collides with the trash. Upon collision, the trash item disappears, signifying that it is now inside the net.
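The pick-up itself can be as simple as a trigger check on the net. A minimal sketch, assuming the trash prefabs are tagged “Trash” (not my exact code):

```csharp
using UnityEngine;

// Minimal sketch: attached to the virtual net that is spawned on the tracked image.
// Assumes the trash prefabs are tagged "Trash" and that either the net or the trash
// carries a Rigidbody so the trigger event fires; names are illustrative only.
public class NetCollector : MonoBehaviour
{
    public int collected;

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Trash"))
        {
            collected++;
            Destroy(other.gameObject); // the trash disappears, i.e. it is now "inside the net"
        }
    }
}
```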
Initial Screenshots of the three basic functionalities of the app:
User Interface: The instructions panel. The instructions have now changed, since there is no need to tap on the screen to generate the trash.
Plane Detection: Trash on the detected plane. The sizing and the type of trash are not representative. [Although the objects do get larger when the player comes closer, they are still small.]
Image Tracking: The image of a real net was on the laptop screen. The smartphone detected that image and generated a cube. The cube represents the virtual net that will be used to pick up trash.
Gallery B was on average the easiest to understand and interact with
Gallery A was the hardest to use mostly because there was no pathway to “walk” from position A to trash and back
Gallery C was either much liked or much disliked due to the multiple levels
Back to our main research question: participants responded that objects were “highlighted equally”; however, “The sizing between items were not equal and categories of trash was too mixed up” and “I think it is natural to go grab bigger items but also they are proportionate to real life – a rubber duck isn’t going to be bigger than a cardboard box” [quoting a couple of participants].
List of Design Requirements:
-Make white placing plinths eye level
-Make trash size more realistic
-Add more trash types
-Make the whole process less time consuming
-Re-visit affordances in the design
-Differentiate plinths from trash assets stylistically
-Make sure trash has no affordances other than picking up and placing, e.g. cardboard that can be opened
-The concept of teleportation was not correctly understood by all participants
E.g. one participant kept picking up items without teleporting
-For iteration three keep in mind that artefact placement must be able to tell a story
-Make transition between AR and VR natural
-Make sure there is no stylistic difference between AR and VR
-Make sure there is consistency between the platforms
Notes for List of Instructions
Make sure in instructions to answer common questions:
Where to place the artefacts
How to place the artefacts
What is the goal of the experience/player
Let them know there is no correct way to place the items
How to teleport
How to pick up the items
What is the connection between AR and VR
Transition Instructions
Instructions
For the AR part of the experience
[TBC]
For VR part of the experience
In voiceover or UI:
You have now collected all the artefacts and are ready to proceed to the next part of the experience [transition phrase]
In this part you must curate an art gallery with the types of trash that you collected previously (in the AR part). Your goal as an agent is to highlight the objects that you place equally and to make sure that your trash gallery exposition will be able to educate the rest of the people of your galaxy.
The items should be placed on the white placing pillars that are floating around you. There is no correct way of placing the items.
Teleport to the trash items by pressing the pointer button, pointing to your desired location, and then releasing.
Grab the item by pressing the grip button, stooping, and moving your arm in a “grabbing” motion. Do not release the grip button after you grab the item.
To teleport next to the white placing pillars, repeat the teleport action with the pointer button, this time without releasing the grip button.
It should be noted that you cannot grab multiple items at the same time
If you arrive at your desired location [next to a pillar] and wish to place the item, all you have to do is release the grip button.
One could argue that our project was an experiment in using VR as a previs tool for a VR project. VR has been used as a previs tool for actual films and series, such as The Lion King and The Mandalorian, but it has not been used “inception-ally”, i.e. previs in VR for a VR project.
Doing previs in VR was a challenge because we had to unlock the key points of distinction between a film and a VR experience [in our case a VR trailer]. For example, a big aspect of previs in films is focusing on camera placement and movement. However, in a VR experience the camera is essentially the player wearing the headset. Therefore, no matter how limited in interaction an experience is, and how linear its storyline, the movement of the headset is always random and unpredictable. So instead of focusing on where the camera is going to be placed and what pathway it is going to follow, I tried designing affordances in the scene that allow for different perspectives and hence make it possible to tell the story in both first-person and third-person point of view. I decided that this parameter was of utmost importance for previsualizing Scene 4, which is why it was the first practice I experimented with and tested in my previs process. It should be noted that prior to testing the perspectives I had already set up a basic scene with the sizes and proportions in place. Sizes and proportions of assets are also very important in VR, which is why I had already adjusted them before starting the previs process. Should I have devoted a part of my previs to sizes and proportions instead of establishing them beforehand?
I then proceeded to test practices that are of equal importance in both films and VR experiences, such as lighting, style/color, and motion/movement of assets in the scene. I am not going to delve into detail on how I implemented these, as I have another post dedicated to the full analysis of each step of the process.
Spatial sound is a parameter that is mostly important in Virtual Reality, and I left it for the end for a couple of reasons. First, the sound was not available to me until much later and I did not want to leave the whole project to the last minute; second, I believed that spatial sound is a flexible parameter that can be tested at any point. Was I wrong?
Motion sickness is an aspect I tested at each step, which is why I did not add it to the order above.
Because previs in VR for a VR project is unprecedented, there is no established order in which to test each practice. That is why, when I finished the whole process and looked back, I questioned whether my end result would have taken a completely different path had I chosen a different order of testing.
What if I had not established sizes and proportions from the very beginning? What if the sizes were completely random and I had tested different perspectives with random proportions? Would that have led me to an epiphany that something works better in a different way? What if I had no order at all and tested everything at the same time, iterating that test over and over again by changing a few settings each time? For example, change sizes and proportions, lighting, and colors, in both first-person and third-person point of view, and then do another iteration with different colors, different lighting settings, and different sizes. Would that lead to something completely different? Additionally, assuming that I had the sound from the very beginning, would the order of the incoming assets and their respective motion be different from the end result that I have now? In other words, before adding sound my scene was about 3 minutes long. When I got access to the sound and realised that the whole soundtrack of the trailer is 3 minutes long and I had to pick just a few seconds of it, I had to readjust the speed of the incoming assets. What if I had known from the beginning what my sound would be?
I guess the questions above cannot be answered at this time. More iterations may clear up some of my queries, but I also believe that even though previs is a bounded framework, it varies from project to project and can significantly change the meaning of a project depending on how one decides to order and follow the framework.
Ultimately, I do believe that previs is a helpful process for VR projects. As mentioned above, even though it is a structured framework, I think it offers different pathways that may lead to completely distinct end results. I also believe that previs in VR would be even more helpful for large projects, and not just a three-minute trailer as in our case. In my opinion, previs in VR for VR must evolve further, and designers/VR creators must start adding more practices to the VR previs framework list.
Step 1 – Creating a custom “world” for VRChat in Unity
-Downloading the correct version of Unity
-Downloading the VRChat SDK for world-building (VRChat Worlds 3)
-Setting up a basic scene with the VRCWorld prefab
Step 2 – Adding the assets we want to experiment with into the scene
Step 3 – Becoming familiar with Udon and adding appropriate action/interaction components and scripts to our assets that will allow us to experiment and test
Previs practices that we decided would be interesting to test while we are all live at runtime (a sketch of one such interaction follows this list):
Lighting: Changing the intensity of the light in the scene
Environment / style: Changing the color temperature of the scene (i.e. warm color to cold color)
Sound: Figuring out what amount of spatial blend is ideal
Assets: Observing sizes and proportions as players in the scene and giving feedback on them
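As an illustration of the kind of interaction script meant in Step 3, here is a minimal UdonSharp-style sketch for the lighting test. It is an assumption of how such an interaction could look rather than our actual world script, and since Interact only fires locally, syncing the change for everyone in the instance would additionally need Udon networking, which I leave out here.

```csharp
using UdonSharp;
using UnityEngine;

// Minimal UdonSharp-style sketch (the same idea could be built as an Udon graph).
// Clicking the object cycles the scene light between a few intensity/temperature
// presets; values and field names are illustrative assumptions, not our exact scripts.
public class LightCycler : UdonSharpBehaviour
{
    public Light sceneLight; // the directional light, assigned in the Inspector

    private float[] intensities;
    private Color[] temperatures;
    private int index;

    void Start()
    {
        intensities = new float[] { 0.5f, 1f, 1.5f };
        temperatures = new Color[]
        {
            new Color(1f, 0.85f, 0.7f),  // warm
            Color.white,                 // neutral
            new Color(0.75f, 0.85f, 1f)  // cold
        };
    }

    public override void Interact() // fires when a player clicks the object in VRChat
    {
        index = (index + 1) % intensities.Length;
        sceneLight.intensity = intensities[index];
        sceneLight.color = temperatures[index];
    }
}
```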
Step 4 – Testing it live with the whole group
Step 5 – Afterthoughts:
List of common previs practices that I will be testing in Scene 4:
Different perspectives / POV / Interactivity / Navigation / Avatar role (?)
Lighting / Shadows
Movement of assets: e.g. speed, rotation, distance from player
How many of them are going to be there – maybe cut down a few assets?
Colours in the scene – [Environment / Style]
Spatial Sound
Motion sickness
Process
Practice #1: Different Perspectives
My first trial of experimenting with different points of view involved adding a variety of teleportable transparent planes to the scene, so as to be able to move around and have a 360° view of the motion of the assets and of the player, who is in 1st person POV only.
Video #1:
Although I like the idea of having multiple planes around the scene to have a 360 ability to observe the assets, I did not really like the setup of the planes so I changed them.
The new layout of the teleportable transparent planes is rectangular (when observed from a top view), on three different levels encompassing the moving assets and the player in 1st person POV. The first level is right beneath the moving assets, the middle level is at the same height as the avatar in 1st person POV (i.e. at eye level with the assets), and the third level is right above the moving assets.
Testing the new design:
I was very satisfied with this setup.
*Important Note: It should be noted that the alien avatar standing in the first person point of view in the scene is only placed there for testing purposes and will not be used in the actual VR trailer. This alien avatar was downloaded from Google Poly from the user “Poly by Google” with the title “Alien” (https://poly.google.com/view/6FrJ3_CzH8S)
Practice #2: Experimenting with Lighting
At first I decided to experiment with directional lighting. I tried testing the directional light at runtime by changing it manually: I hit Play in the scene, opened the directional light settings, and started making changes.
Video:
I was not really impressed with any change I made with directional light so I decided to keep it as it was originally and play with ambient color.
How to play with ambient color:
Change the intensity of the directional light from 1 to 0, then go to the Lighting settings, switch the environment lighting source from Skybox to Color, and try out different colors.
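For reference, the same steps can be done from a script; this is a rough sketch assuming the built-in render pipeline, with a placeholder light reference and color:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Minimal sketch of the same steps done from code instead of the editor UI.
// The referenced light and the chosen color are placeholders for my actual values.
public class AmbientColorSwitch : MonoBehaviour
{
    public Light directionalLight;                            // the scene's directional light
    public Color ambientColor = new Color(0.9f, 0.4f, 0.6f);  // any color to try out

    void Start()
    {
        directionalLight.intensity = 0f;               // directional light intensity from 1 to 0
        RenderSettings.ambientMode = AmbientMode.Flat; // environment lighting: Skybox -> Color
        RenderSettings.ambientLight = ambientColor;    // the flat ambient color to test
    }
}
```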
My first trial of ambient color experimentation at runtime:
Unfortunately, although the circles (asset #1) had a very cool 2D effect, the rest of the assets were showing up pitch black.
I changed it back to the original directional light / skybox settings, as shown in the video above, and their colors were showing up properly.
I realized the material of the rest of the assets had a metallic setting and that could be the reason the ambient color setup was not working properly on them. I therefore changed it and re-tested it.
It Worked!
I really liked this new ambient color lighting setting. It gives a very interesting 2D effect to the moving assets, so I recorded it in both 1st person and 3rd person POV to make sure I liked it both ways.
Overall, I believe that this lighting setting works better for Scene 4, mostly because it better matches the style and concept of the original work.
However, I was not happy with the looks of the icospheres towards the end. Their style changed a lot, in the wrong direction: the interesting 3D effect and color toning they had with directional lighting completely disappeared. I therefore marked them with an asterisk and decided to revisit the issue as soon as I was done with the rest of the basic practices I wanted to test.
Practice #3: Movement of assets [speed, rotation, distance from player]
Changes:
The circles came closer to the player in 1st person POV and moved faster.
I changed the motion of the squares completely. I wanted to differentiate the movement of the squares from that of the circles while still keeping the same style.
The rotating squares are slower and more aligned.
No motion sickness was experienced with the new movement.
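For context, a rough sketch of the kind of mover/rotator script behind these changes, with placeholder values rather than my final speed and rotation settings:

```csharp
using UnityEngine;

// Minimal sketch of the kind of motion script driving the assets; the speed and
// rotation values are illustrative, not the final settings used in the scene.
public class AssetMotion : MonoBehaviour
{
    public Transform target;        // the 1st person POV position the asset moves towards
    public float moveSpeed = 1.5f;  // units per second towards the player
    public float rotateSpeed = 20f; // degrees per second around the asset's own axis

    void Update()
    {
        // advance towards the player while keeping a slow, aligned rotation
        transform.position = Vector3.MoveTowards(transform.position, target.position,
                                                 moveSpeed * Time.deltaTime);
        transform.Rotate(Vector3.up, rotateSpeed * Time.deltaTime, Space.Self);
    }
}
```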
Practice #4: Cutting down a few assets:
This is a practice that I followed/experimented with at each step of the process. Ultimately I realized that the beautiful ‘Escherian’-looking shapes I had in the original setup of Scene 4 [that showed up at the end] were ‘too much’: they did not offer anything extra to the scene, nor did they completely match the style I was going for, so I decided to ‘abandon’ them.
Practice #5: Colours in the scene
I really wanted the colors in the VR scene to match the style and the color palette of the original work, so I started changing and testing the colors in my scene with the original work in mind.
I began by changing the looks of the circles.
I then decided it was time to revisit the looks of the icospheres, since I was at the stage of the previs process where I was trying to change the style to match the original work. As mentioned above, the golden icospheres really changed stylistically when I decided to keep the ambient color settings. I therefore had to write a script that switches the lighting back to the original directional light settings only during the seconds when the golden icospheres appear in the scene. And so I did. The light intensity and the ambient color settings change at runtime when the golden icospheres appear, and their style is once again how I wanted it to be.
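A rough sketch of what that switch can look like; the timing values are placeholders, since the real script is tied to my scene’s own timing:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Minimal sketch of the runtime lighting switch. The timing fields are placeholders;
// the actual script is tied to the moment the golden icospheres appear in my scene.
public class IcosphereLightSwitch : MonoBehaviour
{
    public Light directionalLight;
    public Color ambientColor = Color.magenta; // flat ambient color used in the rest of the scene
    public float icospheresAppear = 18f;       // seconds into the scene (placeholder)
    public float icospheresDisappear = 24f;    // seconds into the scene (placeholder)

    void Update()
    {
        float t = Time.timeSinceLevelLoad;
        bool icospheresVisible = t >= icospheresAppear && t <= icospheresDisappear;

        if (icospheresVisible)
        {
            // back to the original directional light / skybox look while the icospheres are on screen
            directionalLight.intensity = 1f;
            RenderSettings.ambientMode = AmbientMode.Skybox;
        }
        else
        {
            // flat ambient color look for the rest of the scene
            directionalLight.intensity = 0f;
            RenderSettings.ambientMode = AmbientMode.Flat;
            RenderSettings.ambientLight = ambientColor;
        }
    }
}
```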
If you go to 55″ in the video you will see the settings changing (right side of the screen) during runtime:
Finally, I also wanted to change the colors of the squares and the rotating squares. I wanted each of them to represent a different color combination found in the original work, as you can see in the video below:
Practice #6: Working with spatial sound
I left this practice for the end because the sound was not available to us until much later, and I did not want to waste time waiting for the sound to be ready and end up testing everything at the last minute.
Anyhow, Rosie, the Sound Arts student on our team, composed the soundtrack for our VR trailer. She composed a three-minute soundtrack that is theoretically meant to cover all 8 scenes of the trailer, so Scene 4, my scene, would be worth only a few seconds of it. I chose 30″ from the middle of the soundtrack that was also stylistically a good match with my scene, and decided to “resize” my scene (time-wise) to fit those 30 seconds.
Additionally, I changed the settings of the sound in the scene so that it blends in and does not create a break in presence. I decided it was best to create two audio sources and adjust them individually: one audio source meant for a player in 1st person POV and the other for a player in 3rd person POV (a sketch of this setup follows).
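A rough sketch of that setup, with assumed spatial blend values rather than my exact final ones:

```csharp
using UnityEngine;

// Minimal sketch of the two-audio-source idea. The spatialBlend values are
// illustrative assumptions (0 = fully 2D, 1 = fully 3D), not my exact final settings.
public class PovAudioSetup : MonoBehaviour
{
    public AudioSource firstPersonSource;  // placed at the 1st person POV plane
    public AudioSource thirdPersonSource;  // placed among the teleportable planes
    public AudioClip soundtrackExcerpt;    // the 30" excerpt of the trailer soundtrack

    void Start()
    {
        // 1st person: mostly 2D so the track stays steady while the player looks around
        firstPersonSource.clip = soundtrackExcerpt;
        firstPersonSource.spatialBlend = 0.2f;
        firstPersonSource.Play();

        // 3rd person: more 3D so the sound stays anchored in the scene as the player teleports
        thirdPersonSource.clip = soundtrackExcerpt;
        thirdPersonSource.spatialBlend = 0.8f;
        thirdPersonSource.Play();
    }
}
```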
When recording in VR mode, the sound is not audible in the recording even though the player in VR can hear the track perfectly. Therefore I made a desktop-version walkthrough where the sound is clear.
Additionally I recorded the walkthrough in 1st and 3rd person POV in VR.
Revisiting the Idea of Multiple Points of View:
After recording the final walkthrough I decided to revisit the idea of allowing the player to choose between 1st and 3rd person point of view. Adding and testing different perspectives in the scene is important for my previs process because different perspectives represent different camera positioning in more traditional previsualisations. The most important aspect of previs is telling the story from the perspective of the camera; therefore, in this case, I had to test telling the story of this scene via different perspectives and figure out which works best.
I decided that the 3rd person point of view does not add anything to the scene and that, compared to it, the 1st person point of view works better for a variety of reasons.
To begin with, the visuals are experienced better in first person, especially now that the lighting settings change at runtime. In third person POV the player can view the floating assets for longer than the player in 1st person and may therefore perceive the runtime lighting change negatively, because the rest of the assets turn black, except for the golden icospheres that really “shine”.
Moreover, the scene is only a few seconds long, so time is of the essence. The few seconds spent teleporting around the scene distract the player from viewing the moving assets, which is the whole point.
Finally, the whole scene was initially designed in first person, and the assets are therefore best viewed from that angle. Although it is interesting to watch them from above, below, and behind, in the end this extra viewpoint does not add to the meaning of the scene.
Maybe if the scene had been set up from the very beginning in a way that encouraged movement around it, my thoughts on adding different perspectives would be different.
Testing Motion Sickness:
This is a parameter that I constantly had in mind while designing the scene. It is a practice that is hard to prove I have tested; however, I did make sure to test the scene with third parties for extra reassurance.
Currently the scene can be experienced from two perspectives, 1st person POV and 3rd person POV. 1st person POV: In first person the player’s visual affordances are the skybox, the moving shapes coming towards them (which change color and position), and a small plane they are standing on – big enough to fit the player so they do not feel they could drop into space, yet small enough that they do not believe they can move from their initial position. The environment includes the bare minimum of assets needed for a successful experience of the scene in 1st person POV.
3rd person POV: In the 3rd person POV the environment includes long transparent planes. Although they do not look rigid – they are very thin, transparent, and floating in space – I do believe the player would perceive them as walkable/teleportable (which they are; the whole point of this POV is to have the player teleport around the walkable planes and perceive the experience from a variety of angles). The other visual affordances in this POV are the moving shapes/objects that advance towards the virtual character standing at the 1st person POV position, the transparent plane that this virtual character (not the player) is standing on, and the skybox.
False Affordances and how to ‘fix’ them:
1st person POV: There is a chance the player might think they can touch the moving objects coming towards them, but only those that pass really close to the player, e.g. the circles. 3rd person POV: One could argue that the virtual character standing on the plane designed for the 1st person POV could be perceived as interactable, even though the plane the virtual character is standing on is of a different color and the character is not only immobile but also gives no visual clues that they are “waiting” to be interacted with.
1st person POV: I am exploring different lighting that makes the shapes more 2D-like and more transparent, so that they are differentiated from the rest of the environment and cannot be perceived as touchable/interactable 3D objects. 3rd person POV: I could try different designs for the non-teleportable plane and the non-interactable virtual character, for example making the plane smaller and non-transparent, unlike the walkable planes, and making the virtual character look two-dimensional.