Viewing entries tagged
virtual reality

Virtual Devising and Acting for Developing Experience, Story and Social Interaction Simulation.

This ongoing project investigates the application of immersive theatre and improvisation-based devising methods in the development of room-scale virtual reality experiences.

The projects allow a participant to put on a Vive head-mounted display and interact in real time with a virtual environment and a virtual avatar performed by a live actor. Each environment is associated with a rough story idea. The participant can improvise interactions and dialogue with the live actor. Some variations of this setup also introduce an additional character pre-recorded/captured by the same or another actor. Most environments include physical props that match the locations of some virtual objects, creating the possibility of haptic feedback. With attached optical markers, some of the props are also physically manipulable. Through this work we seek a better understanding of how to develop innovative VR experiences that involve co-presence and cooperation among multiple participants and haptics based on real objects, with foreseeable applications in the arts as well as education, various types of training, and multi-player simulation.

The technical setup takes place inside a 40x40’ volume with a 20x20’ trackable area and a projection screen for the audience and the actor. Physical furniture and props provide haptic feedback for the participant. Besides the furniture and the screen, spike-tape marks on the floor guide the actor. We combine optical tracking of the live actor and physical props with tracking of the HTC Vive headset and controllers via the Lighthouse base stations. Vicon Blade, in combination with Unity 3D or MotionBuilder, is used for prototyping and developing the experience. Immersive sound is optionally used in the experiences, allowing the participant to hear the actor's voice through headphones as the actor speaks into a wireless microphone.
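One recurring task in a hybrid setup like this is expressing Vicon-tracked props in the Vive's coordinate frame. The sketch below is illustrative rather than the project's actual pipeline (the calibration procedure and function names are assumptions): given matched 3D points observed in both systems, the Kabsch algorithm recovers the rigid transform between the two frames.

```python
# A minimal sketch, assuming corresponding calibration points have been
# captured in both tracking systems (e.g. a marked prop placed at several
# shared reference positions). Not ACCAD's actual code.
import numpy as np

def kabsch(vicon_pts, vive_pts):
    """Rigid transform (R, t) mapping Vicon coordinates into Vive coordinates.

    vicon_pts, vive_pts: (N, 3) arrays of corresponding points.
    """
    a, b = np.asarray(vicon_pts, float), np.asarray(vive_pts, float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)     # centroids of each point cloud
    H = (a - ca).T @ (b - cb)                   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# After calibration, any marker position streaming from Vicon can be placed
# in the headset's world with: vive_xyz = R @ vicon_xyz + t
```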

Method of Loci: Multi-scaled Integrated VR for Collaborative Meaning Making

Method of Loci (a mnemonic system in which items are mentally associated with specific physical locations) Alan Price (Design); Isla Hansen (Art); Scott Swearingen (Design); Norah Zuniga Shaw (Dance); Michelle Wibbelsman (Latin American Indigenous Cultures); Ben McCorkle (English). Demo Location: SIM Lab.

PROVOCATION

We set out to explore modes of interaction between users immersed in VR with a head-mounted display and users with an external, third-person perspective using a multi-touch display. The design intent was to draw awareness to the differences in scale and perspective, engaging users in a process of collaboration that requires navigation and communication across the two modalities and encourages awareness of both digital and physical experience.

MAKING

The current outcome is a networked multi-user VR collaboration space that encourages experimental making and play through collective creation, assembly, and recording. A mobile web app is used to upload images, sound, video, and 3D models in real time, contributing to a growing and malleable virtual world. Inside this world, users can move, combine, and attribute physical properties to objects, videos, and sounds. By recording these movements, users can create animations, drawings, and spatial soundscapes. Objects take on meaning through the users' intent, creating associations through composition and movement in the virtual space. The system can be used for staging games, collective sense-making, storytelling, or other purposes to be discovered.
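As a hedged illustration of the record-and-replay idea (the project's actual implementation is not shown here; the types below are simplified stand-ins for engine objects), a movement recording can be as simple as timestamped transform keyframes that are sampled back during playback:

```python
# A minimal sketch, assuming the engine reports an object's position and
# rotation once per frame while the user is recording a movement.
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    t: float           # seconds since recording started
    position: tuple    # (x, y, z)
    rotation: tuple    # quaternion (x, y, z, w)

@dataclass
class Recording:
    frames: list = field(default_factory=list)

    def capture(self, t, position, rotation):
        """Append one keyframe; call once per frame while recording."""
        self.frames.append(Keyframe(t, position, rotation))

    def sample(self, t):
        """Replay: return the last keyframe at or before time t (hold).
        A fuller implementation would interpolate between neighbors."""
        latest = self.frames[0]
        for frame in self.frames:
            if frame.t > t:
                break
            latest = frame
        return latest
```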

REFLECTION

Critical thinking and research in the domain of humane technology can include ongoing study of the design of interfaces, the design of modes of interaction, and the design of technology that lets us move freely between physical and digital constructs. Developing systems that prompt their users to reflect on how we understand our engagement with systems, and how we can engage with one another through a system, benefits from focusing on the attributes that support or expose a deeper dialogue about the mechanisms that enable that engagement.

Birdbot: Encouraging Full-bodied Play in VR Fantasy World

Birdbot flyover: flap your arms to drift over various compassionate landscapes as conceived and created by students in Design. Norah Zuniga Shaw (Dance, Principal Investigator); Alice Grishchenko (Lead Designer); Isla Hansen (Art); Maria Palazzi (Design), and students in Palazzi's Design 6400 class: Breanne Butters, Stacey Sherrick, Sarah Lawler, Zachary Winegardner, Kevin Bruggeman, Devin Ensz, Bruce Evans, Dreama Cleaver, Kien Hong. Demo Location: SIM Lab @ accad.osu.edu.

Birdbot balance: Rise through virtual worlds and make music with your wings as you achieve balance challenges in VR. Norah Zuniga Shaw (Dance, Principal Investigator); Alice Grishchenko (Lead Designer); Isla Hansen (Art); Maria Palazzi (Design). Demo Location: SIM Lab @ accad.osu.edu.

PROVOCATION

Get moving in VR! BirdBot grew out of an early Sandbox Collaboration we had using the Kinect to get good full-body interaction in virtual reality (rather than just being able to move or play with things using controllers). It is also a response to one of our core research interests in this project which is to create more physically active and stimulating virtual reality experiences. 

The resulting prototype is what we call a "movement toy," and there are a few movements we targeted specifically, including "balance," "level changes," and any gross motor action (in this case flapping the arms). But really, any desired movement could become a mechanic of this "toy."

MAKING

We created a series of virtual environments for the Oculus Rift using a Kinect as our sensor. One of our creative interests was to see what happens when we start with a movement idea and let the virtual world grow from there. A movement creates a story and the story creates the world. So it was a very intuitive, emergent process that evolved through many iterations in the collaborative space between our minds/bodies. We had some fantastic brainstorming sessions with visual artist Isla Hansen about making a physical installation to experience while in VR and will continue that going forward. The nature imagery and heron came from our discussions about de-centering the human and making non-mirrored interfaces. When you put the headset on and enter the world of Birdbot, you are in a peaceful room with grids on the walls, but it is filled with trees and your shadow is a heron. If you flap your arms, a hidden world is revealed, and as you balance on one foot (a challenge in VR) you rise up into a bright pink tunnel where you can make music with light-up chimes. Finally, you enter a flyover world where you soar over a collage of compassionate landscapes created by students in our Teaching Clusters, including a tapestry made up of family photographs compiled from our research team.
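As a rough sketch of how such movements could become mechanics (illustrative only, not the BirdBot source; the joint names loosely follow Kinect conventions and the thresholds are invented), one-foot balance and arm flapping can be read off per-frame skeleton joint positions:

```python
# A minimal sketch, assuming `joints` maps joint names to (x, y, z) tuples
# with y pointing up, as a Kinect-style skeleton stream might provide.

def is_balancing(joints, lift_threshold=0.15):
    """One-foot balance: one ankle raised well above the other (meters)."""
    left_y = joints["ankle_left"][1]
    right_y = joints["ankle_right"][1]
    return abs(left_y - right_y) > lift_threshold

class FlapDetector:
    """Counts a flap each time both hands swing from above the shoulders
    to below the hips: a crude but robust gross-motor gesture."""

    def __init__(self):
        self.arms_were_up = False
        self.flaps = 0

    def update(self, joints):
        hands_y = (joints["hand_left"][1] + joints["hand_right"][1]) / 2
        up = hands_y > joints["shoulder_center"][1]
        down = hands_y < joints["spine_base"][1]
        if up:
            self.arms_were_up = True
        elif down and self.arms_were_up:
            self.arms_were_up = False
            self.flaps += 1          # one full down-stroke completed
        return self.flaps
```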

REFLECTION

As always in the iterative design process, some of the things we tried out but didn't use provided fun learning experiences and made the work stronger. The challenge of computer recognition of particular motions is a long-standing issue, but the Kinect has made things easier, and it is fantastic to see people moving and laughing and feeling good in VR.

Further reflection by Alice Grishchenko at http://www.humanetechosu.org/humaneblog/2017/5/12/bird-thoughts

Collaborators: Norah Zuniga Shaw (Dance, Principal Investigator); Alice Grishchenko (Lead Designer); Isla Hansen (Art); Maria Palazzi (Design), and students in Palazzi's Design 6400 class: Breanne Butters, Stacey Sherrick, Sarah Lawler, Zachary Winegardner, Kevin Bruggeman, Devin Ensz, Bruce Evans, Dreama Cleaver, Kien Hong. Demo Location: SIM Lab @ accad.osu.edu.

Bird Thoughts

Alice Grishchenko, an MFA student in design and a key collaborator on the Humane Technologies team, writes:

I loved making Birdbot. It is a virtual world that encourages certain motions; it isn't really a VR game, so sometimes we call it a toy. Creating it was a non-linear process. Norah calls that emergent. We started from a place of abstract interactions, prototypes, and a jumble of 3D models and textures pulled from many different sources, and somehow we ended with a surreal three-part experience whose most visible linking themes are birds and shadows. Personally, I started with some questions like:

  • What is achievable with this cross section of technology?
  • Which types of movement are engaging? 
  • Which environmental designs encourage engaging movements and provide discernible and satisfying feedback to the user? 

Each of these questions is actually a cascade of many other questions that should ultimately be answered by players interacting with the system, but before having those answers you have to create the system by anticipating them. Hypothetical answers are tricky, so I started with some low-investment prototypes.

The goal was to get the player moving in a fun way by creating an interactive environment. I tested many interactions with physics simulations, flying, floating, and rhythmic movement. I would run these by Norah and we'd talk about the intention compared to the feeling of the environment, then repeat this playtesting process with other collaborators to see what perspectives they could bring. Through this process we created three different interactive environments, then connected them with visual transitions and common themes. Slowly we started to solidify which features we wanted to develop in each scene. The three stages became surreal, calm spaces with strange gravitational properties and shadowy avatars that represent otherness. The levels remain separate in the mechanics of their interactions: balancing, reaching out for virtual contact, and flapping arms in a way that imitates flight.

We connected the three levels in a way that creates the experience of moving from an enclosed space upwards to a vast open space, and then forwards into a tunnel that leads back to the beginning of the experience. The content of the levels changes dramatically once the player ascends to the open space, because we enlisted the help of Maria Palazzi's class to create compassionate landscapes for the player to soar over. The class's work is combined to generate a procedural world that changes over time. Collaborating with an entire class for a week was a really unique experience for me, and Maria's class contributed great insights and beautiful assets that made the work much richer. I worked with Skylar Wurster to develop the spherical procedural landscapes, and we used some custom shaders to fade between ground and (sideways) sky textures.
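The ground/sky fade can be thought of as a blend factor driven by how far a surface point tilts away from "up." The sketch below expresses that idea in Python for readability rather than in shader code; the fade thresholds are invented for illustration and are not taken from the project's shaders.

```python
# A minimal sketch, assuming a unit surface normal per shaded point on the
# spherical world and a global "up" direction.
def ground_sky_blend(normal, up=(0.0, 1.0, 0.0), fade_start=0.3, fade_end=0.8):
    """Blend factor in [0, 1]: 0 = pure ground texture, 1 = pure sky.

    As the normal tilts away from `up`, the point fades from ground
    toward the sideways sky texture.
    """
    facing_up = sum(n * u for n, u in zip(normal, up))   # cos(angle to up)
    tilt = 1.0 - max(0.0, facing_up)                     # 0 upright .. 1 sideways
    # smoothstep between fade_start and fade_end for a soft transition
    x = min(1.0, max(0.0, (tilt - fade_start) / (fade_end - fade_start)))
    return x * x * (3.0 - 2.0 * x)
```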

A compassionate landscape/interpretation of how birds may view an urban environment

All these changes between the three levels and all their components provide players with many unusual experiences and sensations one after another. I think the piece really leads the player to think about identity and the journey of connection they just experienced.

Humane Object Agency: Part Two, Implementation

Collaborating faculty Matthew Lewis writes: In a previous blog post I described my introduction to the Humane Technologies project and my intentions for the pop-up week: exploring the use of interactive virtual reality to simulate an Internet of Things (IoT) filled space, with participants embodying the roles of the communicating smart objects inhabiting the environment. Leading up to the big week, I met with several of the participating faculty who gave me invaluable suggestions for additional readings, relevant pop culture references, and other perspectives on possible "motivations" for the IoT devices to be simulated in the project.

During the pop-up week Professor Michelle Wibbelsman and I met with Professor Hannah Kosstrin's dance class and explained the basic idea of the project. Michelle and I had come up with a few exercises/scores with different emphases for the students to try out. For example, we initially split the students into two groups and requested that one group take a dystopian perspective on IoT devices while the other imagined a more utopian viewpoint. While the devices in the latter group focused on keeping the apartment inhabitant happy and comfortable, the former group embodied more of an overbearing nanny/salesperson space. For the initial round, we requested that the performers communicate primarily via motion. There was a strong tendency, however, to speak primarily to the person in VR and to communicate in general via anthropocentric means. For the next round we requested that communication happen only through movement, and primarily between the IoT devices rather than focusing on communicating with the apartment's inhabitant. Additionally, we asked some performers to take on the roles of aspects of the communications infrastructure: one dancer was "Wi-Fi" and others were "messages" traveling through the network between the devices.

There was very little time for planning between performances/simulations, so most of the resulting systems and processes were improvised during each performance. As a result, very little successful motion-based communication actually took place (though many attempts were made). However, these initial no-technology experiments in the classroom gave us a great deal of information and discussion points for our technology-based experiences a couple of days later.

Several people were involved in the implementation of the quickly assembled technological system. I initially specified the desired system features and set up the physical system components. Skylar Wurster (Computer Science undergrad) and Dr. J Eisenmann (ACCAD alumnus / Adobe research) implemented the interaction and control scripts in the Unity real-time 3D environment. Kien Hoang (ACCAD Design grad) assembled a 3D virtual apartment for the VR environment.

Professor Kosstrin participated in the role of the inhabitant of the VR apartment. At Professor Wibbelsman's suggestion, we avoided naming this character so as not to bias our notions of their role too strongly (e.g. "owner", "user", "person", "human", "human object", etc.). We ended up frequently making a stick-figure gesture mid-sentence to refer to them during our discussions. The intent was that as the physical performers communicated outside of VR, there would be some indication inside VR that the virtual smart objects were talking to one another. A few visual options were implemented in the system: the objects could move (e.g. briefly "hopping" a small amount), they could glow, or they could transmit spheres between one another, like throwing a ball. Given the motion-based communication we were attempting with the dancers, I chose to use primarily the movement method to show the VR appliances communicating. This was implemented with a slight delay: if the smart chair was going to send a message to the smart TV, first the chair would move, then the TV would move, as if in response. I imagined this being perceived like someone waving or signaling, followed by the message recipient waving back.
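A minimal sketch of that delayed "wave and wave back" behavior might queue the receiver's response a fraction of a second after the sender's. This is illustrative only, not the Unity scripts described above; `play_hop` stands in for whatever animation call the system actually used.

```python
# Hedged sketch: sender's hop plays immediately, receiver's hop is queued.
import heapq

def play_hop(device):
    """Placeholder for the brief 'hop' animation on a virtual appliance."""
    print(f"{device} hops")

class CommAnimator:
    def __init__(self, response_delay=0.6):
        self.response_delay = response_delay
        self.pending = []                 # min-heap of (due_time, device)

    def send(self, now, sender, receiver):
        play_hop(sender)                  # the sender signals first...
        heapq.heappush(self.pending, (now + self.response_delay, receiver))

    def update(self, now):
        # ...and the receiver "waves back" once the delay has elapsed
        while self.pending and self.pending[0][0] <= now:
            _, device = heapq.heappop(self.pending)
            play_hop(device)

# Usage: anim = CommAnimator(); anim.send(now=0.0, sender="chair", receiver="tv")
# then anim.update(now=0.7) prints "tv hops" in response.
```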

We investigated two methods for connecting communications in the physical and virtual worlds. In our first trials, we simply relied on an indirect puppetry approach: a student at a workstation (Skylar) watched the dancers, and when one started communicating with another, he pressed an appropriate keyboard button to trigger the communication animation in the virtual world. For one of the later runs, Ben Schroeder (ACCAD alumnus / Google research), Jonathan Welch (ACCAD Design grad), and Isla Hansen (ACCAD Art faculty) all contributed solutions enabling the dancers to touch a wire to trigger a communication. While this had the advantage of giving the performers direct control of their virtual counterparts, the downside was that it limited their movement possibilities. Regardless, inside VR the movement of the appliances did not read to our VR participant as communication: "Why is the refrigerator hopping?" Time during the brief session didn't allow for experimentation with the other communication animation approaches, but I suspect some of the other modes might have fared better.
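The puppetry approach itself needs little more than a mapping from keys to device pairs. The sketch below (hypothetical key bindings and device names, reusing the CommAnimator sketch above) shows the shape of it:

```python
# Hedged sketch of the operator-as-puppeteer setup: the operator watches the
# dancers and presses a key mapped to the corresponding device pair.
KEY_TO_COMM = {
    "1": ("chair", "tv"),       # illustrative bindings, not the real ones
    "2": ("fridge", "lamp"),
    "3": ("tv", "fridge"),
}

def on_key_press(key, animator, now):
    """Trigger the virtual communication animation for a watched exchange."""
    if key in KEY_TO_COMM:
        sender, receiver = KEY_TO_COMM[key]
        animator.send(now, sender, receiver)
```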

Professor Wibbelsman led the group in discussion, and we quickly discovered that our goal of eliciting new ideas about future possibilities for these emerging technologies seemed to be a success: everyone had strong opinions about what might emerge and big questions about what they might be more or less comfortable with. One further practical consideration that emerged was the need for dancers to use a separate "narration" voice to communicate with the person in VR, to tell them things they needed to pretend were happening as the improvisation ran its course (e.g. a refrigerator door opening and giving them access to ice cream). Despite the pop-up providing an invaluable week for everyone to focus on prototyping projects such as these, one of the more surprising challenges was access to people's time. Many details of the project were not the result of well-considered design decisions but rather of what the person who popped up to work for an hour or two could accomplish before jumping back out to a different project.

Humane Object Agency: Part One

Collaborating faculty member Matthew Lewis writes: I arrived at the Humane Technologies project and group later than most of the participants. I was invited to participate in the pop-up week, which would focus on virtual reality this semester. I've been curious about using VR technologies for interface prototyping, and this seemed like a great opportunity. As with all pop-up participants, I was encouraged to consider either joining existing project groups or bringing my own ideas to the table.

Not having been part of the earlier discussions, I came with unbiased ideas about "humane technologies" that primarily involved evaluating people's interactions with the technology emerging around them, in positive and negative ways. In particular, I've been reading almost daily newspaper articles about the "Internet of Things" (IoT). Usually these discussions center on debates between convenience and privacy: e.g. your internet-connected devices are controllable via your smartphone, but they also report your engagement to advertisers for marketing purposes.

Discussions of the Internet of Things tend to predict that smart objects will increasingly communicate in complex webs of systems which may or may not have our best interests in mind. Just as it is often said that "you are not the consumer but rather the product" for companies like Facebook, networked smart objects like your TV might be "free" to use as well, in exchange for allowing an infrared camera to monitor your apartment and track your eyes as you watch TV.

With this context in mind at my first humane tech meeting, I heard Professor Michelle Wibbelsman (Spanish & Portuguese) mention two things that resonated with me: indigenous peoples' beliefs about objects having agency, and "Object-Oriented Ontologies" (OOO). I was curious about the idea that some cultures may have already thought a great deal about how to live surrounded by objects that have agency. Additionally, "Object-Oriented Ontology" is a relatively recent perspective on metaphysics that has attracted some attention from computer scientists working at the intersection of philosophy and human-computer interaction. OOO involves a de-centering of humans that considers physical objects, ideas, their relationships, and agencies all as equally valid objects of philosophical consideration.

At this same initial meeting, Professor Hannah Kosstrin (Dance) mentioned that her motion analysis class's graduate students would be available to participate in projects during the pop-up week. Years ago I was fascinated by a presentation I’d seen on "service prototyping" which used actors as participants for interactive system design. I proposed that Hannah's students could embody the roles of communicating IoT devices, exploring the possibility space of system agency. Many IoT species will converse primarily with other smart objects and networked systems, rather than interacting directly with people in their space. What might such devices be "talking" about? What could their awareness and motivations encompass in different future scenarios?  

Additionally, I envisioned another participant immersed in a VR apartment environment, experiencing representations of these devices communicating around them. For example, there might be an indication that a smart TV, smart refrigerator, and smart couch were all observing aspects of their environment and "doing their job," whatever that might be. What would it be like to live in such a space?

I suspected that embodying this simulation/performance might lead to thought-provoking discussion, helping us contemplate aspects of such emerging technologies and trends in ways we might not have considered through mere thought experiments. I also hoped we might gain insight into the humane-technology aspects of IoT, beyond the current discussions of privacy vs. convenience. Last, I hoped to gain experience with the usefulness of VR for interaction design prototyping. In a follow-up post, I'll discuss the implementation and outcomes of the pop-up.

Vita Berezina-Blackburn: Storytelling Potential Using Motion Capture

As the Humane Technologies research team first began contemplating the 2016-2017 "Livable Futures" theme in Autumn semester, we held a series of sandbox sessions in the ACCAD labs and studios, each led by a different team member. The purpose of these sandboxes was to engage in a "doing thinking" process together with various humane technology frameworks in order to explore potential lines of inquiry, develop research questions, and build relationships. What follows are notes developed in conjunction with this particular sandbox session. 

Sandbox: Motion Capture with Vita Berezina-Blackburn

Wednesday, November 30, 9:30-11:30am in the ACCAD Motion Lab

Attendees: Vita Berezina-Blackburn, Alex Oliszewski, Norah Zuniga Shaw, Peter Chan, Scott Swearingen, Scott Denison, Alan Price, Mindi Rhoades, Hannah Kosstrin, Isla Hansen

Sandbox Framework for Collaboration:

Investigation of approaches for presenting narratives in full-body, room-scale VR scenarios driven by practices in theater production and acting. The Sandbox will include demos of ACCAD's current state of available technologies and existing VR experiences from the Marcel Marceau project, as well as related creative practices. Tech: Vicon Motion Capture System, MotionBuilder, Oculus.

Anticipation / Expectation:

• VR, motion capture, and training performers, live storytelling in physical and virtual worlds, theater artists driving VR creation

Disposition / Experience: 

Thoughts gleaned from participants during and after the sandbox: 

• Two characters were having a conversation in a science fiction future and I was able to walk around as an invisible third-party (fly on the wall) and observe.

• The conversation was secondary as I was exploring the view and props from this high-rise virtual set design. But I could have easily replayed the scene, taken a seat beside them and listened more intently the second time.

• Is this a significantly more entertaining means of experiencing narrative?

• The thought of 'stepping' into someone's experience was very interesting, and whether or not I would be more likely to follow his mesh or his shadow.

• When doing 180-degree turns in VR I need some sort of reflection so I can see his movement when he goes off-screen.

• Having multiple instances works well pedagogically or as a learning environment, but not so much from the perspective of "appreciate this historic performance."

• Having a CG hand that can interact with the environment would be useful and engaging. Placing an invisible trigger-box around it could easily test for collision.

• Using headphones would connect with the experience better because audio would be more contextually sensitive. For instance, MOCAP lab walls bounce sound differently than the tight quarters I was experiencing in VR. Scale is always an issue.

• In some ways this reminded me of 'manual cinema', but the audience would also need headsets to approach parity with actors.

• The concept of 'priming for the meta-aesthetic' was very interesting.

Reflection / Opportunity:

• The technical aspects of this are way over my head, but I wonder if this could be done with multiple Google Cardboard to avoid the tethering requirement of Oculus? 

• As in the Marcel Marceau experiment, are we able to learn faster/more through embodied experiences, i.e. could someone practice an interview or social etiquette this way? 

• Could the viewer/reader/player use something like this to inspect props/evidence within the scene to help solve the crime? With the addition of more sophisticated facial detail and scanning at the input stage might we also have been able to study character behaviors?

• Could designers use a similar approach to experience thought problems and test critical thinking?

• Could we build a scene or environment with all the trappings of the “problem space”, especially one that is remote or in a faraway place, in which designers can immerse themselves for study?

• I wonder if MOCAP style labs will replace some studio spaces, i.e., desks and laptops, with untethered headsets and communal, embodied experiences/learning?

• What could we accomplish with scale? One could either 'watch' or 'follow' and have a full understanding of the entire body and weight distribution throughout the performance, without having to piece together anatomy that's off-screen.

• Matt Lewis suggested the podcast 'Voices of VR' - interviews with the movers and shakers of virtual reality... sounded awesome.

• Why did the character that we embodied during this exercise assume we were 'physical' (Why not a droid/ghost/spectre like Sally was)? That could help explain some of the physical/VR inconsistencies related to navigating the space.

Alan Price: Testing Virtual Reality Interfaces

As the Humane Technologies research team first began contemplating the 2016-2017 "Livable Futures" theme in Autumn semester, we held a series of sandbox sessions in the ACCAD labs and studios, each led by a different team member. The purpose of these sandboxes was to engage in a "doing thinking" process together with various humane technology frameworks in order to explore potential lines of inquiry, develop research questions, and build relationships. What follows are notes developed in conjunction with this particular sandbox session. 

Sandbox: Kinect/Oculus Playdate with Alan Price

Wednesday, September 28, 9:30-11:30am in the ACCAD SIMLAB

Attendees: Stephen Turk, Candace Stout, Peter Chan, Scott Swearingen, Scott Denison, Alan Price, Norah Zuniga Shaw, Isla Hansen, John Welch

Anticipation / Expectation:

• To promote discussion and questions about full body engagement and motion in VR, capturing action with playback and real time drawing, and representation in VR spaces...

• To pose the question “what is this for?”

• To explore the VR format (presumably a current interest in use of HMDs with head tracking).

• To explore the embodiment in virtual space; multi-sensory compared with full-body engagement and representation (point-of-view/ gaze).

• To explore the recording of motion (playback, reflection, analysis, of how participants move and engage over time).

• To explore the internal development (starting the process of developing tools for portable templates and future sandboxes created in-house).

• To focus on the user reflecting upon his/her own body as the active element in the space, independent of any encumbrances such as hand-held wands or game controllers.

Disposition / Experience:

• How people are able to physically engage in a virtual space in interesting, new, creative and/or healthful ways.

• What makes the VR Player do things that are fun to watch as well as fun for them?

• How desired motions could drive the game mechanics, such as a desire for people to extend their range of motion, to change levels, to make cross-lateral patterns, and to balance?

• Could additional bodies in space in the VR experience (either inside or outside) create a more interesting learning environment for a viewer / user / player?

• Could you create a dance score with moving objects in the virtual realm?

If so -- what are these objects?

• Who is our intended / ideal Audience? ... How do we want our experience to relate to and possibly change who they are or how they think?

• How can we enhance the experience to make evaluative design decisions within the virtual space?

• How to teach game design through new technologies that are not yet fully realized. (SS)

• How can we navigate the world better than with handheld devices?

• Could it be that games are real, and toys are not? ... The context is fiction, but the decisions are real - and lasting.

Reflection / Opportunity:

• VR player as performer...

• We felt that interacting with our own recorded motion and the traced forms made us more aware of our bodies (for better or worse).

• Obviously modeling of any kind is a richer experience in 3D, if I can build in layers and then dimensionally look through them.

• Recording motion was a hit. ... I want to go back in now and try to choreograph those figures.

• We were toying with the idea of a human-Tetris-style game that did not require a lot of space to play, where the environment could scale to your available real-world play space.

• It was very interesting for me when I began to think about physical motions as ‘player mechanics’ in a game-related environment.

• The third person perspective and omniscient high viewpoint were of interest.

• I really, really wanted my avatar to be an ‘it’.

• We are interested in play spaces that are physically, socially and creatively engaged.

• I’d like a humanist to help think about narrative and ethical contexts of some of this work and the relationship to post-humanism.

• For VR work that is in conversation with Ghostcatching, a kind of partial reconstruction would be fun.

• I’d like to make a 3d drawing experience that takes IMPROV TECHNOLOGIES into VR.

• I’d like to make something that invites cross lateral motion.

• The big thing I am thinking about is the place of movement qualities in a VR environment, and how training a user to engage movement qualities could lead to more empathetic interactions with the world through a renewed understanding of one's own movement proclivities, which inevitably connect to emotions (how do humane technologies work toward that end?). I am thinking specifically from the vocabulary associated with the Laban systems for movement qualities.

• I’m considering the balance of how each medium [movement improvisations and VR-generated environments] retains its integrity but enhances the best traits of the other... perhaps this ties into the discussion of empathy and self/group awareness.

• I am thinking about the following: the relationship between avatar and player; player-driven goals; connections between environments; visual themes; activities; and the external world.

Alex Oliszewski: Gaming Environments in Virtual Reality

As the Humane Technologies research team first began contemplating the 2016-2017 "Livable Futures" theme in Autumn semester, we held a series of sandbox sessions in the ACCAD labs and studios, each led by a different team member. The purpose of these sandboxes was to engage in a "doing thinking" process together with various humane technology frameworks in order to explore potential lines of inquiry, develop research questions, and build relationships. What follows are notes developed in conjunction with this particular sandbox session. 

Sandbox: VR Playdate with Alex Oliszewski

Friday, September 23, 1-4pm in the ACCAD collaborative space (aka the living room)

Attendees: Ben McCorkle, Norah Zuniga Shaw, Alan Price, Peter Chan, Alex Oliszewski, John Welch

Anticipation / Expectation:

• Getting started by experiencing a wide range of VR games that invite full-body motion, allow creative open-ended play, explore space and the brain's sense of motion, and ask how they might be re-performed or hacked for artistic creation.

• Connection and creativity in VR, and pushing at what they can do.

Disposition / Experience:

• Tension between my body’s sense of space and the actual range in which I have to move (players “backing up” in order to see something better in certain game environments). Issue of scale. Teleporting is dissatisfying. 

• Certain actions in the game inspire level changes, and Tilt Brush is amazing as inspiration for motion. It is great to watch people draw in 3D space.

• There is play between what is happening virtually and in physical space/time, and learning the etiquette of VR takes time.

• The play between VR and what the brain understands as actual experience is an ongoing question, including the potential for manipulation, illness, and changing experience forever (Matrix dystopias).

Reflection / Opportunity:

• What inspires motion in these VR environments and what kinds of motions do we want to encourage if any?

• If post-human is not anti-human then how indeed might we want these technologies to evolve?

• How do locomotion and teleporting in VR impact the sensation of space? What would be better?

• What are the inspired desires for multiple sensors on the body in the VR environment?

• How might knowledge in the performing arts be used to enhance embodied creativity in virtual spaces?

• What about experiential process in dance and things that are paced for exploration and self-discovery?

• How can we use the potential for manipulation in VR (particularly the brain’s sense of motion) as a space for play and well-being?

• What about world creation in virtual environments and using dance improvisation scores as world builders?