
Marientina Gotsis Pop Up Talk

Reflection contributed by Dan Shellenbarger

Marientina Gotsis shared her work and research during her flash presentation for the Humane Technologies Conference. She founded and currently directs the USC Creative Media & Behavioral Health Center, where she also founded and directs the USC Games for Health Initiative. Her work involves translating interactive media innovations to health practitioners, working at the intersection of art, neuroscience, medicine, and public health.

She showed projects developed to support research in fields including Parkinson's disease and mental health. In the area of mental health, she shared a quote from the neuroscientist Jaak Panksepp, whose work provides an inspiration for her own:

“Mental health ultimately means that an individual, through rich emotion-affirming encounters with living, has integrated his or her life in such a way that the emergent self-structures, deeply affective, can steer a satisfying cognitive course through future emotional jungles of lived lives.”

In her own work and in the work developed at the USC center, she facilitates these “emotion-affirming encounters” through technological innovation.

Drawing on innovations from design, the medical and health professions, and the arts, Gotsis works to develop platforms, including VR and AR among other technologies, to help people with afflictions. It is her collaborative nature that produces innovative products for complex health problems. One example Gotsis showed was a shoe embedded with sensors and connected to headphones. Depending on the speed and actions of the user, the user hears ocean sounds either from above the water or from deep beneath it. This simulation is helpful for working with patients with balance disorders and for gait rehabilitation.
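The shoe's mechanic can be thought of as a mapping from gait speed to an audio crossfade. The sketch below is purely illustrative, not the actual device's software: the thresholds, function name, and the linear crossfade are all assumptions made for the example.

```python
def ocean_mix(gait_speed_m_s: float,
              calm_threshold: float = 0.6,
              brisk_threshold: float = 1.2) -> dict:
    """Hypothetical mapping from walking speed to crossfade weights (0..1)
    between a 'surface' and an 'underwater' ocean soundscape.

    All thresholds are illustrative, not taken from the actual device.
    """
    if gait_speed_m_s <= calm_threshold:
        depth = 1.0   # slow or hesitant gait: fully underwater
    elif gait_speed_m_s >= brisk_threshold:
        depth = 0.0   # steady brisk gait: fully at the surface
    else:
        # linear crossfade between the two thresholds
        depth = (brisk_threshold - gait_speed_m_s) / (brisk_threshold - calm_threshold)
    return {"surface": 1.0 - depth, "underwater": depth}
```

In a real gait-rehabilitation setting, the weights would drive two looping audio layers, giving the wearer continuous sonic feedback on their pace and balance.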

Though Gotsis started in painting and drawing, she branched out into interdisciplinary work. She said she was always exposed to technology and even worked at an IT company early in her career. She found that no single academic area has a monopoly on creativity; it takes place everywhere. As a teacher at USC, she said it is good to have fewer disciplinary boundaries. Her current goal with her center is to gather many different people from many different disciplines to attend to bettering human life. She remarked that collaborative work between artists and scientists goes better when all are brought in and respected for their respective areas than when it is set up as a scenario where an artist seeks out an engineer or vice versa. Bringing all disciplines to the table to pursue a goal leverages each participant's scope and approach to problem solving.

Gotsis's work is exciting and beautiful to experience, with the added benefit that it can be palliative as well as moving.


Birdbot: Encouraging Full-bodied Play in VR Fantasy World

Birdbot flyover: flap your arms to drift over various compassionate landscapes as conceived and created by students in Design. Norah Zuniga Shaw (Dance, Principal Investigator); Alice Grishchenko (Lead Designer); Isla Hansen (Art); Maria Palazzi (Design), and students in Palazzi's Design 6400 class: Breanne Butters, Stacey Sherrick, Sarah Lawler, Zachary Winegardner, Kevin Bruggeman, Devin Ensz, Bruce Evans, Dreama Cleaver, Kien Hong. Demo Location: SIM Lab @ .

Birdbot balance: Rise through virtual worlds and make music with your wings as you achieve balance challenges in VR. Norah Zuniga Shaw (Dance, Principal Investigator); Alice Grishchenko (Lead Designer); Isla Hansen (Art); Maria Palazzi (Design); Demo Location: SIM Lab @ .


Get moving in VR! BirdBot grew out of an early Sandbox Collaboration in which we used the Kinect to get good full-body interaction in virtual reality (rather than just being able to move or play with things using controllers). It is also a response to one of our core research interests in this project, which is to create more physically active and stimulating virtual reality experiences.

The resulting prototype is what we call a "movement toy" and there are a few movements we targeted specifically including "balance," "level changes," and any gross motor action (in this case flapping the arms). But really any desired movement could become a mechanic of this "toy." 


We created a series of Virtual Environments for the Oculus Rift using a Kinect as our sensor. One of our creative interests was to see what happens when we start with a movement idea and let the virtual world grow from there. A movement creates a story and the story creates the world. So it was a very intuitive, emergent process and evolved through many iterations that existed in the collaborative space between our minds/bodies. We had some fantastic brainstorming sessions with visual artist Isla Hansen about making a physical installation to experience while in VR and will continue that going forward. The nature imagery and heron came from our discussions about de-centering the human and making non-mirrored interfaces. When you put the headset on and enter the world of Birdbot, you are in a peaceful room with grids on the walls, but it is filled with trees and your shadow is a heron. If you flap your arms, a hidden world is revealed, and as you balance on one foot (a challenge in VR) you rise up into a bright pink tunnel where you can make music with light-up chimes. Finally, you enter a flyover world where you soar over a collage of compassionate landscapes created by students in our Teaching Clusters, including a tapestry made up of family photographs compiled from our research team.


As always in the iterative design process, some of the things we tried out but didn't use provided fun learning experiences and made the work stronger. The challenge of computer recognition of particular motions is a long-standing issue, but the Kinect has made things easier, and it is fantastic to see people moving and laughing and feeling good in VR.

Further reflection by Alice Grishchenko at

Collaborators: Norah Zuniga Shaw (Dance, Principal Investigator); Alice Grishchenko (Lead Designer); Isla Hansen (Art); Maria Palazzi (Design), and students in Palazzi's Design 6400 class: Breanne Butters, Stacey Sherrick, Sarah Lawler, Zachary Winegardner, Kevin Bruggeman, Devin Ensz, Bruce Evans, Dreama Cleaver, Kien Hong. Demo Location: SIM Lab @

Humane Object Agency: Part Two, Implementation

Humane Object Agency: Part Two, Implementation

Collaborating faculty Matthew Lewis writes: In a previous blog post I described my introduction to the Humane Technologies project and my intentions for the pop-up week: exploring the use of interactive virtual reality to simulate an Internet of Things (IoT) filled space, with participants embodying the roles of the communicating smart objects inhabiting the environment. Leading up to the big week, I met with several of the participating faculty who gave me invaluable suggestions for additional readings, relevant pop culture references, and other perspectives on possible "motivations" for the IoT devices to be simulated in the project.

During the pop-up week Professor Michelle Wibbelsman and I met with Professor Hannah Kosstrin's dance class and explained the basic idea of the project. Michelle and I had come up with a few exercises/scores with different emphases for the students to try out. For example, we initially split the students into two groups, and requested that one group take a dystopian perspective of IoT devices, while the other group imagine a more utopian viewpoint. While the devices in the latter group focused on keeping the apartment inhabitant happy and comfortable, the former group embodied more of an overbearing nanny/salesperson space. For the initial round, we had requested that the performers communicate primarily via motion. There was a strong tendency, however, to want to speak primarily to the person in VR and to communicate in general via anthropocentric means. For the next round we requested that communication occur only through movement, and primarily between the IoT devices, rather than focusing on communicating with the apartment's inhabitant. Additionally, we asked some performers to take on the roles of aspects of the communications infrastructure: one dancer was "Wi-fi" and others were "messages" traveling through the network between the devices.

There was very little time for planning between each performance/simulation, so most of the resulting systems and processes were improvised during each performance. As a result, very little successful motion-based communication actually took place (though many attempts were made). However, these sorts of initial no-technology experiments in the classroom gave us a great deal of information and discussion points for our technology-based experiences a couple of days later.

Several people were involved in the implementation of the quickly assembled technological system. I initially had specified the desired system features and set up the physical system components. Skylar Wurster (Computer Science undergrad) and Dr. J Eisenmann (ACCAD alumnus / Adobe research) implemented the interaction and control scripts in the Unity realtime 3D environment. Kien Hoang (ACCAD Design grad) assembled a 3D virtual apartment for the VR environment. 

Professor Kosstrin participated in the role of the inhabitant of the VR apartment. At Professor Wibbelsman's suggestion we avoided naming this character so as not to bias our notions of their role too strongly (e.g. "owner", "user", "person", "human", "human object", etc.) We ended up frequently making a stick figure gesture mid-sentence to refer to them during our discussions. It was intended that as the physical performers were communicating outside of VR, there would be some indication inside VR that the virtual smart objects were talking to one another. A few visual options were implemented in the system: the objects could move (e.g. briefly "hopping" a small amount), they could glow, or they could transmit spheres between one another, like throwing a ball. Given the motion-based communications we were attempting with the dancers, I chose to use primarily the movement method to show the VR appliances communicating. This was implemented with a slight delay: if the smart chair was going to send a message to the smart TV, first the chair would move, then the TV would move, as if in response. I imagined this being perceived like someone waving or signaling, followed by the message recipient responding by waving back.
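The delayed sender-then-recipient animation can be modeled as a tiny event queue. The sketch below is a minimal stand-in for the actual Unity implementation, written in Python for illustration; the class, method names, and default delay are all assumptions made for this example.

```python
# Hypothetical sketch of the delayed "hop" signaling described above:
# when one smart object messages another, the sender animates first,
# and the recipient animates a moment later, as if waving back.
import heapq

class CommSim:
    def __init__(self):
        self.events = []   # min-heap of (time, object_name)
        self.log = []      # the order in which objects "hop"

    def send(self, sender, recipient, now, delay=0.5):
        heapq.heappush(self.events, (now, sender))             # sender hops first
        heapq.heappush(self.events, (now + delay, recipient))  # recipient responds

    def run(self):
        # Drain events in time order; in the real system each pop
        # would trigger the hop animation on that appliance.
        while self.events:
            t, obj = heapq.heappop(self.events)
            self.log.append((t, obj))
        return self.log

sim = CommSim()
sim.send("smart chair", "smart TV", now=0.0)
order = sim.run()  # chair at t=0.0, then TV at t=0.5
```

The point of the delay is legibility: without it, both objects would animate simultaneously and the exchange would not read as a message and a response.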

We investigated two methods for connecting communications in the physical and virtual worlds. In our first trials, we simply relied on an indirect puppetry approach. A student at a workstation (Skylar) watched the dancers, and when one started communicating with another, he would press an appropriate keyboard button to trigger the communication animation in the virtual world. For one of the later runs, Ben Schroeder (ACCAD alumnus / Google research), Jonathan Welch (ACCAD Design grad), and Isla Hansen (ACCAD Art faculty) all contributed solutions to enable the dancers to touch a wire to trigger a communication. While this had the advantage of giving the performers direct control of their virtual counterparts, the downside was that it placed limitations on their movement possibilities. Regardless, inside VR, the movement of the appliances did not read for our VR participant as communication: "Why is the refrigerator hopping?" Time during the brief session didn't allow for experimentation with the other communication animation approaches, but I suspect some of the other modes might have fared better.

Professor Wibbelsman led the group in discussion, and we quickly discovered that our goal of eliciting new ideas about future possibilities for these emerging technologies seemed to be a success: everyone had strong opinions about what might emerge and big questions about what they might be more or less comfortable with. One further practical consideration that emerged was the need for dancers to use a separate "narration" voice to communicate with the person in VR, to tell them things they needed to pretend were happening in VR as the improvisation ran its course (e.g. a refrigerator door opening and giving them access to ice cream). Despite the pop-up providing an invaluable week of time for everyone to focus on prototyping projects such as these, one of the more surprising challenges was having access to people's time. Many of the details of the project were not the result of well-considered design decisions but rather of what the person who popped up to work for an hour or two could accomplish before jumping back out to a different project.

Humane Object Agency: Part One

Collaborating faculty member Matthew Lewis writes:  I arrived at the humane technologies project and group later than most of the participants. I was invited to participate in the pop-up week which would focus on virtual reality this semester. I've been curious about using VR technologies for interface prototyping, and this seemed like a great opportunity. As with all pop-up participants, I was encouraged to consider either joining existing project groups, or to bring my own ideas to the table.

Not having been part of the earlier discussions, my unbiased ideas about "humane technologies" primarily involved evaluating people's interactions with the technology emerging around them in positive and negative ways. In particular, I've been reading almost daily newspaper articles about the "internet of things" (IoT). Usually these discussions center on debates between convenience and privacy: e.g. your internet-connected devices are controllable via your smartphone, but they also report your engagement to advertisers for marketing purposes.

Discussions of the Internet of Things tend to predict that smart objects will be increasingly communicating in complex webs of systems which may or may not have our best interests in mind. In the same vein as it is often said that "you are not the consumer but rather the product" for companies like Facebook, networked smart objects like your TV might be "free" to use as well, in exchange for you allowing an infrared camera to monitor your apartment and track your eyes as you watch TV.

With this context in mind at my first humane tech meeting, I heard Professor Michelle Wibbelsman (Spanish & Portuguese) mention two things that resonated for me: indigenous peoples' beliefs about objects having agency, and also "Object Oriented Ontologies" (OOO). I was curious about the idea that some cultures may have already thought a great deal about how to live surrounded by objects that have agency. Additionally, "Object Oriented Ontology" is a relatively recent perspective on metaphysics that’s attracted some attention from computer scientists working at the intersection of philosophy and human computer interaction. OOO involves a de-centering of humans that considers physical objects, ideas, their relationships, and agencies all as equally valid objects of philosophical consideration.

At this same initial meeting, Professor Hannah Kosstrin (Dance) mentioned that her motion analysis class's graduate students would be available to participate in projects during the pop-up week. Years ago I was fascinated by a presentation I’d seen on "service prototyping" which used actors as participants for interactive system design. I proposed that Hannah's students could embody the roles of communicating IoT devices, exploring the possibility space of system agency. Many IoT species will converse primarily with other smart objects and networked systems, rather than interacting directly with people in their space. What might such devices be "talking" about? What could their awareness and motivations encompass in different future scenarios?  

Additionally, I envisioned another participant immersed in a VR apartment environment, experiencing representations of these devices communicating around them. For example, there might be an indication that a smart TV, smart refrigerator, and smart couch were all observing aspects of their environment and "doing their job," whatever that might be. What would it be like to live in such a space?

I suspected that embodying this simulation/performance might lead to thought-provoking discussion, helping us contemplate aspects of such emerging technologies and trends in ways we might not otherwise have considered through mere thought experiments. I also hoped we might gain insight into the humane-technology aspects of IoT, beyond the current discussions of privacy vs. convenience. Last, I hoped to gain experience with the usefulness of VR for interaction design prototyping. In a followup post, I'll discuss the implementation and outcomes of the pop-up.

Popping In, Popping Out: Reflections on the Humane Technologies Pop-Up Week

Collaborating Faculty member Ben McCorkle writes: From the outset of the Humane Technologies: Livable Futures Pop-Up Collaboration, I wondered what my role would be in it. As a specialist in rhetoric whose interest lies in exploring how technologies have shaped our communication practices throughout history, I’ve been trained to explore these questions from a position that’s somewhat outside and above the immediate action. To an extent, I set out to maintain this stance, intending to watch from the sidelines as an impressive group of technologists, designers, and artists came together in the spirit of play, exploration, and creativity to question how contemporary technologies can be utilized to promote a more compassionate, socially engaged future. But as the week unfolded, I found myself caught up in the gravitational pull, eventually diving in and joining the fray. 

As part of the reporting team (Peter Chan, Michelle Wibbelsman, and myself), I shared the goal of observing and documenting the processes of collaboration as they unfolded throughout the ACCAD space: the brainstorming, the concept building, the rapid prototyping, the problem solving, the play-testing, the refining. Initially, I found myself focused on the technologies themselves, the instruments that facilitated these processes. The whole space was populated by a whole heap of impressive gee-whiz tech, from VR rigs and 3-D printers to interactive touch displays and projectors. Surrounded by this technological infrastructure, it’s tempting (and perhaps even understandable) to forget about the actants, the human agents, that use that infrastructure. I mentally checked myself and popped out of the activity to observe from a different perspective.

I found myself watching how bodies circulated during the week: frenetic, chaotic, playful, eventually leading to patterns… leading to purpose. The open layout of the ACCAD studios facilitated this movement, where people working diligently on one project would be pulled into another for some quick feedback, then to another to help with a demo. Classes would move in and out of the space, students contributing to the tasks at hand. 

I popped back in. I played with data visualizations on a large touch screen, contributed family photos to help Maria build a patchwork landscape for the Fly Like a Bird heron flight simulator, and offered feedback as Scott and his team developed his Digital + Physical Games project. I also worked with Alan as he developed his Method of Loci VR and multi-touch display environment (for this project, I contributed the idea of the classical/medieval rhetorical technique called the memory palace, a method of remembering parts of an oration by mentally placing key points in an imaginary building). This project explores the possibilities of externalizing our individual memories and experiences in a shared, interactive virtual space. I think of this project as a microcosm of what the entire week was about: connecting, creating spaces for empathy and understanding.

I popped back out. As Peter, Michelle, and I talked about what we were observing as ideas took shape, as process yielded product, we leaned on metaphors, symbols, and imagery that reflected this dynamic: the double helix structure of DNA, Chinese ideographs depicting “tree” and “forest,” pictures of a copse of trees, an individual tree with serpentine root structure, imagery of tornados, and Robert Smithson’s earthwork sculpture Spiral Jetty, among others.

At the time, I wrote in response to Peter as he shared a collage of these images:

I’m struck by the resemblances evoked by these different image groupings: curvilinear, evoking a sense of motion/process, "natural." In the sense of conveying a visual identity for whatever it is that humane technologies want to become (despite what we might *want* them to become), these images collectively suggest a common ethos or spirit.

Additionally, these visual metaphors all work together if we consider how systems and ecologies operate, and, more to the point, how we as subjects observe them in operation: from a certain distance, perhaps they appear orderly and unified, but zoom in and you might see frenetic noise or even chaos; zoom in even further, and you might realize there's actually an elegance (perhaps even design?) to that chaos... 

Popping back in. I’ve come to a realization that all of this spiraling imagery is not just a metaphor, but a way of mapping the week-long activity of the Pop-Up. In other words, this movement of bodies not only reflects on a symbolic level how ideas emerge, change, lead to creation, it is *literally* a key mechanism by which they are formed. Hands type and push buttons to change code, arms wave in the midst of gameplay, whole bodies undulate in the service of performing a dance routine. Witnessing firsthand (and even participating in) this whirlwind-in-a-snowglobe, I realize that this dynamic is at play when we scale up to consider culture at large. The problem is, we don’t always recognize that; perhaps the solution lies in deliberately attempting to bring about those moments of recognition more clearly and more often.