Designing the gallery layout for A/B testing – iteration 1

Initial Research

“Ways of Seeing” – John Berger quotes:

“The way we now see the paintings of the 1st half of the twentieth century is different from how we perceived them at the time they were created. If we discover why this is so then we will also discover something about ourselves and the situation in which we are living.”

“The process of seeing paintings is less spontaneous and natural than we tend to believe.”

“A large part of seeing depends upon habit and convention.”

“Convention of perspective: centres everything on the eye of the beholder.”

“The human eye can only be in one place at a time – it takes its visible world with it.”

“The painting on the wall, like the human eye, can only be in one place at a time.”

“Paintings become messages”

“If we go up close to the painting, somehow we must be able to feel it.”

Sketch – according to John Berger, this is how the eye sees a painting on a wall.

Experiencing VR Galleries on SideQuest:

Initial VR designs in Gravity Sketch and Tiltbrush:

Three different ideas, all based on John Berger’s sketch of how the eye sees a painting on a wall.
Sketch 1 was made in Gravity Sketch; sketches 2 and 3 were made in Tilt Brush.
Sketch 1: a full circle on which to place artefacts
Sketch 2: semicircles placed at different heights within the same perimeter circle
Sketch 3: John Berger’s sketch inverted – two parallel walls with artefacts left and right, creating the illusion that the corridor narrows

Initial Feedback after midcrit:

We could dissociate ourselves completely from conventional art gallery design and instead brainstorm a gallery that is closer to the nature of the artefacts displayed – for example, a large dumpster that acts as the gallery.

Designing in Blender:

Designing and setting up the environments in Unity:

Gallery A:

Gallery B:

Mid-term critique presentation

https://nanocrit.com/issues/issue7/trash-trash-art
Artists present trash as trash in very different ways. Dieter Roth archived his daily waste in his installation Flat Waste (1975-76/1992), a work which operates as both diary and self-portrait, and which offers the viewer the opportunity to delve into a library of his personal trash. Over the course of a year, Roth saved all of his waste material on a daily basis, flattening or folding it in order to place it all in transparent sleeves in ring binders chronologically filed on shelves. (Roth first made Flat Waste over the period of a year in 1975-6, but some of the volumes lost from the original year were replaced with binders of rubbish collected on the same day in later years.) Between every specially designed shelving unit, a lectern enables viewers to look at a selection of the ring binders–no waste escaped his collection: used toilet paper, toothpaste, train tickets and cigarette butts are all there. Roth’s preservation of decaying waste conveys to the viewer a sense of the scale of one individual’s everyday waste over a single year, and its ultimate fate as excess.
https://nanocrit.com/issues/issue7/trash-trash-art
“to think about what they don’t want to think about” – because disposal breaks the connection between object and consumer, vanquishing waste from the consumer’s mind by making it appear to disappear.
Another more recent work that brings together a unique chronicling of waste, the process of decay, and more recent forms of environmental art can be found in Joshua Sofaer’s The Rubbish Collection at the Science Museum, London (16 June–14 September 2014), part of the museum’s Climate Changing program.
https://nanocrit.com/issues/issue7/trash-trash-art
One iconic historic reference to this idea is the Daily Mirror’s 1976 headline, “WHAT A LOAD OF RUBBISH,” an attempt to capture the public outrage at The Tate Gallery’s (now Tate Britain’s) purchase of Carl Andre’s Equivalent VIII (1966), a purchase funded by British taxpayers during an economic slump. 
 The headline labeling the work as rubbish played with the idea that it was both trash and trashy, even though it was made out of new materials—firebricks—and nothing else. Recently turned into a painting by the collective Claire Fontaine, this iconic headline set the stage for establishing Equivalent VIII, which came to be known as “The Bricks,” as the epitome of “rubbish,” low quality contemporary art.
Garbage and generic waste management is a challenging task in modern cities. Every area has its own waste production pattern in terms of the kind and volume of waste produced, and optimizing collection is key to reducing costs while ensuring that city decor is always maintained.

For some cities, this task is made even more difficult by the impossibility of installing underground containers. This is the case in Amsterdam, where in most of the city center, garbage collection relies on citizens and tourists dropping trash bags at given collection spots, at given hours (twice a week). In this case, it is vital to optimize the collection process and to minimize the number of trash bags accumulating at any of these spots.

Many projects that aim to solve this problem involve some form of sensors scattered through the city, responsible for collecting data about garbage distribution (IoT-style). We find this approach expensive, both to install and to maintain, not at all scalable, and not environmentally friendly. The solution to environmental problems cannot be to produce and scatter even more disposable electronics all over a city.
You only look once (YOLO) is a state-of-the-art, real-time object detection system.
How It Works
Prior detection systems repurpose classifiers or localizers to perform detection. They apply the model to an image at multiple locations and scales. High scoring regions of the image are considered detections.
We use a totally different approach. We apply a single neural network to the full image. This network divides the image into regions and predicts bounding boxes and probabilities for each region. These bounding boxes are weighted by the predicted probabilities.
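To make the grid idea concrete, here is a rough C# sketch (C# because our project is built in Unity) of decoding one YOLO-style output grid into detections. The tensor layout, names, and one-box-per-cell simplification are our assumptions for illustration, not the layout of any specific YOLO release.

using System.Collections.Generic;

// Illustrative decoder for a YOLO-style output grid.
// Each cell predicts one box (x, y, w, h), a confidence, and C class probabilities.
public struct Detection
{
    public float X, Y, W, H;   // box centre and size, normalised to [0, 1]
    public int ClassId;
    public float Score;        // box confidence * class probability
}

public static class YoloDecoder
{
    // grid: [S, S, 5 + C] per cell -> (x, y, w, h, confidence, C class probabilities)
    public static List<Detection> Decode(float[,,] grid, float threshold)
    {
        int s = grid.GetLength(0);
        int numClasses = grid.GetLength(2) - 5;
        var detections = new List<Detection>();

        for (int row = 0; row < s; row++)
        for (int col = 0; col < s; col++)
        {
            float confidence = grid[row, col, 4];

            // Pick the most probable class for this cell.
            int best = 0;
            for (int c = 1; c < numClasses; c++)
                if (grid[row, col, 5 + c] > grid[row, col, 5 + best]) best = c;

            // Boxes are weighted by the predicted probabilities;
            // low-scoring regions are discarded.
            float score = confidence * grid[row, col, 5 + best];
            if (score < threshold) continue;

            detections.Add(new Detection
            {
                // x and y are offsets within the cell; convert to image coordinates.
                X = (col + grid[row, col, 0]) / s,
                Y = (row + grid[row, col, 1]) / s,
                W = grid[row, col, 2],
                H = grid[row, col, 3],
                ClassId = best,
                Score = score,
            });
        }
        return detections;
    }
}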
ARKit: record spatial features of real-world objects, then use the results to find those objects in the user’s environment and trigger AR content.
In iOS 12, you can create such AR experiences by enabling object detection in ARKit: Your app provides reference objects, which encode three-dimensional spatial features of known real-world objects, and ARKit tells your app when and where it detects the corresponding real-world objects during an AR session.
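Since our project lives in Unity, the practical route to ARKit’s object detection is Unity’s AR Foundation wrapper. A minimal sketch, assuming a scanned reference object library has already been assigned to the ARTrackedObjectManager in the Inspector (the prefab and field names are ours):

using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Reacts to ARKit object detection via AR Foundation's ARTrackedObjectManager.
public class DetectedObjectSpawner : MonoBehaviour
{
    [SerializeField] ARTrackedObjectManager trackedObjectManager;
    [SerializeField] GameObject labelPrefab;   // hypothetical AR content to place

    void OnEnable()  => trackedObjectManager.trackedObjectsChanged += OnChanged;
    void OnDisable() => trackedObjectManager.trackedObjectsChanged -= OnChanged;

    void OnChanged(ARTrackedObjectsChangedEventArgs args)
    {
        foreach (var trackedObject in args.added)
        {
            // ARKit has found a real-world object matching a reference object:
            // anchor our AR content at the detected pose.
            Instantiate(labelPrefab,
                        trackedObject.transform.position,
                        trackedObject.transform.rotation);
            Debug.Log("Detected: " + trackedObject.referenceObject.name);
        }
    }
}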
Vuforia: Object Recognition
Object Recognition allows you to detect and track intricate 3D objects, in particular toys (such as action figures and vehicles) and other smaller consumer products. Use the Object Scanner and the accompanying object target scanning image to easily scan your detailed toys, models, and educational tools. 
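In Unity this surfaces as an ObserverBehaviour on the Object Target. A minimal sketch of listening for detection, assuming a Vuforia Engine 10-style API and a target already scanned with the Object Scanner (the class name and log text are ours):

using UnityEngine;
using Vuforia;

// Listens for tracking-status changes on a Vuforia Object Target.
public class ObjectTargetHandler : MonoBehaviour
{
    ObserverBehaviour observer;

    void Awake()
    {
        observer = GetComponent<ObserverBehaviour>();
        observer.OnTargetStatusChanged += OnTargetStatusChanged;
    }

    void OnDestroy() => observer.OnTargetStatusChanged -= OnTargetStatusChanged;

    void OnTargetStatusChanged(ObserverBehaviour behaviour, TargetStatus status)
    {
        // TRACKED means the scanned 3D object is currently visible and followed.
        bool tracked = status.Status == Status.TRACKED ||
                       status.Status == Status.EXTENDED_TRACKED;
        Debug.Log(behaviour.TargetName + " tracked: " + tracked);
        // e.g. enable the AR content parented under this target here.
    }
}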
Feedback:
Include instructions
Make transitions smooth
VR gallery – turn it upside down? Maybe a huge piece of garbage?
Move on with A/B testing
If we want to change the design approach, we can do it in our second iteration
Explore these ideas further later
Add the duration of the experience

Week 6 – Important feedback – [Tutorials with Ana, Christos, & Luke individually]

Tutorial 16/2 with Ana feedback:

  • Select common practices that happen in previs and try to focus on those alone, e.g. camera position – select them in a smart way – find those that are best highlighted in VR
  • Camera, VFX, lights, blocking, positioning
  • Experiment with a couple that may work best in VR
  • How would we move them around? 
  • How can you tell a different story from the same scene just by moving things? – the essence of a story: perspective – point of view

Two things to provide:
An experimentation tool – keeping track of how we develop it
A scene presented via the tool – showing experiments on the scene through the tool

Try out Unity Collaborate [*we tried Unity Collaborate; it did not really work for us*]

Tutorial 16/2 with Christos feedback:

Assets: ready-made models in Maya: character rigs, etc.
Zoetrope assets: animation/image sequences – in a previs these would be used to tell a story

Thoughts on VR as a previs tool: a great tool, a great way to interact with other people on a project, to understand volume, size and scale, and to understand the characters, environment and layout.
Lighting, size, scale and spatial awareness for building an environment

Virtual production

Disney+: The Lion King behind-the-scenes extras: how they used game engines (e.g. Unity) and VR headsets to lay out and build the environment – they literally built the entire Lion King world.

VR lets you keep expanding worlds, whereas in Maya you are limited by the amount of RAM the computer has, the graphics card, and the scene itself; game engines allow you to get around this.

Links on The Lion King and how they did it:

https://www.ibc.org/trends/behind-the-scenes-the-lion-king/4407.article

https://tinyurl.com/4c9mpxh5

  • Why use previs / main takeaway:
  • Using camera and lens to tell a story 
  • Setting up the lighting: a good thing to investigate
  • Animation does not have to be fancy at all; can be blocks
  • Prevising the cinematography
  • If we were the director: what lens would we use? 
  • Camera positioning
  • Movement through the scene
  • The Third Floor: https://thethirdfloorinc.com/
  • Research short films/stories that we take inspiration from
  • Research cameras, look at the angles, and translate that into our own production; think about the narrative and how we would like our cinematography to draw out that narrative
  • Think about what excites you on screen
  • Translate it in our own way: does this work?

ITERATE A LOT / KEEP IT SHORT and succinct and try to polish it

Is VR just a tool for us, or is it also a great tool that helps the viewer immerse in the scene?

Tutorial with Luke B. 18/2

  • Time is everything with previs
  • It is more about the cameras than what is in front of them
  • Basically, using cinematography as a language to tell a story; everything else is just props
  • Marvel films are great with previs
  • VR for previs: virtual production and IR cameras
  • PocketStudio: a new tool
  • Watch the behind the scenes of the Mandalorian
  • Previs and virtual production blending into each other
  • Virtual cameras – Unreal Engine can do that: connect an iMac or iPhone to Unreal Engine
  • PocketStudio is set up for this, but only with Android phones
  • Virtual production with real time rendering: game engines, virtual pipelines – forefront of the industry [VR]
  • Weta Digital’s Meerkat demo in Unreal Engine – real-time hair [Weta Digital released the scene files for free]
  • Unity is being used as well, but there are not as many tutorials out there for Unity vs Unreal Engine (go watch virtual production tutorials in Unreal Engine)

10am Wednesday Previs lecture Luke B: 

  • In a VR project previs is more like a mapping of where things are – a blueprint of how it is going to play through – whereas in films it is more about the cameras
  • Check out the company The Void (VR games and virtual theme parks): mapping out the world and mapping out the game, using VR to walk around the environment, tell a story and not be boring
  • Magician trick: infinite circle
  • What you are trying to achieve before achieving it
  • Big problem with VR: it is all experimental
  • Provide a recording of someone walking around an environment
  • Point: how to tell a story in 360 – essentially be the camera
  • Try to encourage the audience to want to be in a specific place where the action is happening
  • How to direct the audience

Check out:
Jon Favreau [ virtual production talks]
I Am Mother
Jungle Book
Avatar

Bibliography

Bridger, L. (2021) Tutorial with Adrika S Farhid, Ekaterini Tsoris Zymnis, Inga Masliy, 17 February.

Sfetsios, C. (2021) Tutorial with Adrika S Farhid, Ekaterini Tsoris, Inga Masliy, 16 February.

Tudor, A. (2021) Tutorial with Adrika S Farhid, Ekaterini Tsoris, Inga Masliy, 16 February.

Interesting sources provided by Christos that will help our understanding of virtual production / VR previs:

Some links around pre-vis/post-vis/tech-vis:

Really important:

https://www.realtimeuk.com/blog/the-detailed-guide-to-previs/

https://www.artofvfx.com/tag/previs-supervisor/

https://theasc.com/ac_magazine/June2009/Previs/page1.html

Previs: Change the Game with Game Engines | Virtual Production | Unreal Engine – 2019

How Marvel Actually Makes Movies Years Before Filming | Movies Insider – 2021

George Lucas On The Pre-Visualization Process – 2009

How Film Directors and Previs Artists Work Together in Pre-Production- 2019

Planning our user research

Curation in a VR space

-Research question idea-

(This will be divided into 2 iterations:

Iteration 1 – layout/architecture of the gallery

Iteration 2 – create a narrative/story in the gallery)

How do we make sure that the interior architecture of the gallery template we design in part 3 of the experience (along with its degrees of freedom for placing items) emphasises the importance of each item equally and has the ability to create a story?

–   Demographic questionnaire   –

Age

Gender

Level of education (A levels? Undergraduate? Postgraduate? PhD?)

Professional Status 

Nationality

Income level

Any prior VR experience?

Does your profession involve any design elements to do with VR?

–   Interview guidelines   –

Do you have access to VR?

Do you have any prior experience with using VR?

Is the interface easy to use?

Is the workflow of the experience understandable?

What do you think about how information and features are laid out?

If you could change one thing in the experience what would it be? 

If you collected items in the AR experience:

Were you able to lay out the items in the gallery the way you wished to?

Is there something you would change in the architecture of the gallery?

-Make a plan of-

Where to recruit users- 

‘Close bubble’

When to recruit them- 

Once we have a base prototype

How many you’d need for your user research- 

3-5 people

VR as a tool for previsualisation – Research

What is Previs?

Previsualization is a collaborative process within pre-production where filmmakers visually plan scenes with camera work, lighting, character movements, etc. (ScreenSkills, no date)

Traditionally, this process has been performed with drawings, concept images, sketches, etc., and it is not until recently that previs has been performed with 3D animation tools. (Wikipedia, 2021)

Previs artists usually start with a 2D storyboard. They create draft versions of the different moving image sequences and they put it all together using their compositing and editing skills. (Wikipedia, 2021)

History

Disney Studios was the first to officially use the term storyboard. The term was used for a simple planning technique.

The making of the first three Star Wars films, beginning in the mid-’70s, introduced low-cost innovations in pre-planning to refine complex visual effects sequences.

1981: Spielberg – Raiders of the Lost Ark – miniature scale models (CineFix IGN Movies and TV, 2015)

1982: Francis Ford Coppola, in the movie One from the Heart: electronic cinema -> through electronic cinema, Coppola sought to provide the filmmaker with on-set composing tools that would function as an extension of his thought processes

3D computer graphics were relatively unheard of until the release of Steven Spielberg’s Jurassic Park in 1993 – as a result, computer graphics lent themselves to the design process (Wikipedia, 2021)

Virtual Reality as a tool for previs

 Putting filmmakers into an actual film scene with the use of VR technologies

Pre-production is a phase that is focused on planning, to optimize the later production phase. In films with complicated shots or with large amounts of visual effects, this planning can be harder to visualize, and the costs of producing visual effects and CG-based effects are considerable. 3D animation tools such as Maya or Blender can provide creators with tools for animating characters, environments and cameras, and for testing different lighting, etc., hence providing resources for executing the planning process. However, these tools do not include the interactive layer that is an integrated part of game engines. (Ardal et al., 2019)

Game engines have been used to provide real-time experiences for scenarios other than pure gaming, by combining animation and 3D modelling from traditional 3D animation tools with interactive practices from game creation. These tools can also be an integrated part of previsualization, where filmmakers can immerse themselves in film scenes and test different camera work, character movements, etc. in real time. On the other hand, these 3D animation and game engines require a high level of skill for modelling, animating and scripting; hence, filmmakers might not have the required skill sets to perform previs. VR has the affordance of providing immersive experiences of being in virtual environments. Similarly, it can mimic real-life situations in various scenarios, which induces interaction principles that people are accustomed to, e.g. picking up objects using hands or walking around in a physical space that is reflected in a virtual environment. (Ardal et al., 2019)
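As a rough illustration of the “picking up objects using hands” affordance described above, a grab interaction in Unity can be very small. This is our simplified sketch using Unity’s XR input API, not code from Ardal et al.; it assumes the script sits on a tracked hand object that has a trigger collider:

using UnityEngine;
using UnityEngine.XR;

// Minimal grab interaction: hold grip near an object to pick it up.
public class SimpleGrabber : MonoBehaviour
{
    Rigidbody held;          // object currently grabbed, if any
    Rigidbody candidate;     // object inside the hand's trigger volume

    void OnTriggerEnter(Collider other) => candidate = other.attachedRigidbody;

    void OnTriggerExit(Collider other)
    {
        if (other.attachedRigidbody == candidate) candidate = null;
    }

    void Update()
    {
        var hand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);
        hand.TryGetFeatureValue(CommonUsages.gripButton, out bool gripped);

        if (gripped && held == null && candidate != null)
        {
            held = candidate;
            held.isKinematic = true;            // follow the hand rigidly
            held.transform.SetParent(transform);
        }
        else if (!gripped && held != null)
        {
            held.transform.SetParent(null);     // release back into the world
            held.isKinematic = false;
            held = null;
        }
    }
}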

Case Study: Monocular

Monocular – an app in development that works on the HTC Vive

“System overview:

Creating visualizations in the system is divided into three phases: scene preparation, realtime animation and video export. 

The scene preparation phase is considered to be an offline editing phase, and uses the built-in editing tools in Unity. In this phase a specialized artist creates and assembles different 3D models to build up a virtual mock-up of the set. The 3D models typically originate from scenography models made by the art department, 3D scans of real-world locations or 3D models downloaded from online repositories. Rigged character models can be generated from a multitude of software packages. As this process is similar to many types of applications within games and visualization, we will not discuss it further in this paper. The realtime animation phase, however, is considered to be online, and simulates the action taking place on a real-world film set. This includes all action taking place in front of the camera, such as character movements and dialogue, as well as camera motion and the changing of optics (focal length and aperture). After animation is completed, the recorded camera shots are exported to video format in the video export phase. (Ardal et al., 2019)

Characteristics:

No keyframing

Miniature paradigm: the users puppeteer scale models of the characters and cameras. 

Incremental animation passes: The user incrementally records first each character’s motion, and then each camera. When recording each object, previously recorded animations on other objects are played back in real time” (Ardal et al., 2019)
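To make the incremental-passes idea concrete, here is a small Unity sketch of the recording loop as we understand it: one object (character or camera) is puppeteered per take, while every previously recorded take replays in sync. The structure and names are our illustration, not Monocular’s actual implementation.

using System.Collections.Generic;
using UnityEngine;

// Records one object's motion per take; earlier takes play back in real time.
public class TakeRecorder : MonoBehaviour
{
    struct FramePose { public Vector3 Position; public Quaternion Rotation; }

    // One recorded take per puppeteered object.
    readonly Dictionary<Transform, List<FramePose>> takes =
        new Dictionary<Transform, List<FramePose>>();

    List<FramePose> currentTake;
    Transform currentTarget;
    int frame;

    public void BeginTake(Transform target)
    {
        currentTarget = target;
        currentTake = new List<FramePose>();
        frame = 0;
    }

    public void EndTake()
    {
        takes[currentTarget] = currentTake;   // keep this pass for later playback
        currentTarget = null;
    }

    void Update()
    {
        if (currentTarget == null) return;

        // Record the object being puppeteered in this pass...
        currentTake.Add(new FramePose
        {
            Position = currentTarget.position,
            Rotation = currentTarget.rotation,
        });

        // ...while replaying every previously recorded pass in sync.
        foreach (var entry in takes)
        {
            if (entry.Key == currentTarget || frame >= entry.Value.Count) continue;
            entry.Key.SetPositionAndRotation(entry.Value[frame].Position,
                                             entry.Value[frame].Rotation);
        }
        frame++;
    }
}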

Another Example: https://www.youtube.com/watch?v=mTRViYtusHM

Sources:

Ardal, D., Alexandersson, S., Lempert, M. and Abelho Pereira, A.T. (2019) A Collaborative Previsualization Tool for Filmmaking in Virtual Reality. In: ACM Digital Library. Available at: https://www.diva-portal.org/smash/get/diva2:1391474/FULLTEXT01.pdf (Accessed: 22 January 2021).

CineFix IGN Movies and TV (2015) How important is previz for films?! Film School’d. 30 April. Available at: https://www.youtube.com/watch?v=8kL3iuGm9sA (Accessed: 22 January 2021).

Crimson Engine (2020) MAKING A MOVIE: Pre-vis can really help. 7 September. Available at: https://www.youtube.com/watch?v=a4ABvBNA5qI (Accessed: 22 January 2021).

ScreenSkills (no date) Previsualisation (previs) artist. Available at: https://www.screenskills.com/starting-your-career/job-profiles/visual-effects-vfx/pre-production/previsualisation-previs-artist/ (Accessed: 15 February 2021).

The Rusty Trailer (2014) Previs examples. 12 January. Available at: https://www.youtube.com/watch?v=JW_jylm1LVE (Accessed: 22 January 2021).

Wikipedia (2021) Previsualization. Available at: https://en.wikipedia.org/wiki/Previsualization (Accessed: 17 February 2021).

Initial Storyboard + process

Research on AR and object detection:

Background research in AR object detection

Background research waste types:

Plastic Waste:

Plastic products are very common in our modern life. According to the Pacific Institute, we used approximately 17 million barrels of oil just for producing plastic water bottles in 2006. Plastic waste is one of many types of wastes that take too long to decompose. Normally, plastic items can take up to 1,000 years to decompose in landfills. Even plastic bags we use in our everyday life take anywhere from 10 to 1,000 years to decompose, and plastic bottles can take 450 years or more.

Disposable Diapers:

In the United States alone, about 3.3 million tons of disposable diapers were thrown away in 2018. These disposable diapers take approximately 250-500 years to decompose in landfills, underscoring the importance of programs offering diaper and absorbent hygiene product recycling.

Aluminum Cans:

About 42.7 billion aluminum cans – over 81,000 cans per minute – were recycled in America in 2019. But at the same time, in every three-month period in the U.S., enough aluminum cans and packaging are thrown away – 2.66 billion tons in 2018 – to rebuild the entire American commercial air fleet. Aluminum cans take 80-100 years in landfills to completely decompose.

Paper Waste:

Paper is the largest element in American municipal solid waste. Normally, it takes two to six weeks in a landfill to decompose completely, but it can take decades, depending on moisture levels within the landfill. Recycling paper items saves a lot of landfill space while also reducing the energy and virgin material usage demanded by making non-recycled paper.

Glass:

Glass is normally very easy to recycle, because it is made of sand. By simply breaking down the glass and melting it, we can produce new glass. But the shocking fact is that if glass is thrown away in landfills, it takes a million years to decompose – and according to some sources, it doesn’t decompose at all.
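For our own build, figures like these could feed the AR experience through a simple lookup from a detected waste type to a display string. A sketch in C# (the type labels are our assumptions; the figures are the ones collected above):

using System.Collections.Generic;

// Maps a detected waste type to the decomposition facts gathered above.
public static class DecompositionFacts
{
    static readonly Dictionary<string, string> Facts = new Dictionary<string, string>
    {
        { "plastic bottle",    "450 years or more" },
        { "plastic bag",       "10 to 1,000 years" },
        { "disposable diaper", "250 to 500 years" },
        { "aluminum can",      "80 to 100 years" },
        { "paper",             "two to six weeks, up to decades depending on moisture" },
        { "glass",             "about a million years, if it decomposes at all" },
    };

    public static string Describe(string wasteType)
    {
        string time;
        return Facts.TryGetValue(wasteType, out time)
            ? "A " + wasteType + " takes " + time + " to decompose in landfill."
            : "No decomposition data for '" + wasteType + "'.";
    }
}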

Other Waste:

Initial storyboard on paper, based on the AR detection research, the research on types of non-linear narratives, and the waste types research:

Resources:

https://www.thebalancesmb.com/how-long-does-it-take-garbage-to-decompose-2878033