In the second iteration, the common questions throughout the experience were:
Part A) – VR – Inside the Hangar
- Where do I have to go? (participants had to interact with the neon capsule to trigger the intro briefing video)
- Can I not go outside the hangar?
- Do I have to take the headset off now?
Part B) – AR – Trash Picking
- Do I move the phone or the image?
- How can I go closer to the trash?
- Why does the virtual hand get stuck?
- Why is it not following me?!
Part C) – VR – Gallery Curation
[no extra questions]
Glitches / technological challenges:
- For one participant the session took place after dark, so there was no natural light in the room and the artificial lighting was fairly weak. As a result, image tracking did not work properly: the camera failed to recognise the target image most of the time (a sketch of how such a low-light condition could be flagged at runtime follows below).
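The snippet below is a minimal sketch of one possible mitigation, assuming an ARKit-based iOS build (the actual app may use a different stack): it reads ARKit's per-frame ambient light estimate and could be used to prompt the participant before tracking degrades. The LightingMonitor helper and the 400-lumen threshold are hypothetical and would need tuning on real devices.

    import ARKit

    // Hypothetical helper: warns when ambient light is likely too low for
    // reliable image tracking. ARKit reports roughly 1000 lumens for a
    // well-lit scene; the 400-lumen cutoff below is an assumed value.
    final class LightingMonitor: NSObject, ARSessionDelegate {
        private let minimumIntensity: CGFloat = 400

        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            guard let estimate = frame.lightEstimate else { return }
            if estimate.ambientIntensity < minimumIntensity {
                // In the full app this would surface a prompt asking the
                // participant to move somewhere brighter or turn on a light.
                print("Lighting too dim for image tracking: \(estimate.ambientIntensity) lm")
            }
        }
    }

    // Usage sketch: attach the monitor to an image-tracking session.
    // let monitor = LightingMonitor()
    // let session = ARSession()
    // session.delegate = monitor
    // let config = ARImageTrackingConfiguration()
    // config.trackingImages = referenceImages  // reference images assumed loaded elsewhere
    // session.run(config)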
Summary of results from questionnaire:
- The AR application interface was easy to understand and to use
- Not all participants managed to pick up the trash of their choice in the AR application due to technological glitches
- Participants responded that there was no interruption in their focus when switching from VR to AR
- 50% of the participants were neutral towards the claim that the narrative of the story was interrupted by the switches; the other 50% reported little to no interruption.
- Participants were neutral towards the notion that the switches interrupted the coherence of the experience
- Participants commented that they would rather have the experience entirely in VR
Future Considerations:
Based on the questionnaire feedback from the second iteration, participants would prefer to have the whole experience in VR, which negates the purpose of a cross-platform experience. It should be noted, however, that this iteration could not be run as intended because of Covid restrictions, so the experience was never placed in its proper context.

A good example of contextualising the project would be an outdoor digital art festival. In that scenario the VR part could be the main attraction of the stand, while the AR part could be a pastime activity that participants waiting in the queue complete on their own phones. The outdoor setting is fundamental to this experience, since virtual trash-picking outdoors is a more realistic simulation of actual trash-picking. Furthermore, with natural lighting, image tracking and object detection would work better than under indoor lighting (participant 3: "The app inevitably relies heavily on the quality of the camera and the light source, which, of course, is not always perfect." Participant 3 was not able to generate the net in multiple phases of the AR part, and trash was not generated properly).

Finally, a further addition in the digital art festival context would be to make the experience multiplayer, with one player doing the VR part and another the AR part. The player in VR would then not have to anticipate trash picking themselves; they would instead rely on the other player to do it in AR. In this way participants would not be bothered by having to take the headset off and put it back on (participant 2: "takes some time to adjust in real life when taking the headset off.").