Evelyn Fitzgerald


Working in "Moments"

I've decided to work in "moments" of the monologue and have divided the piece into three major moments. In each moment there is a different set of interactions with the system and responses based on the performer's movement in space. I am working with a director tomorrow to see how it feels to work with a digital tool for performance. I will record my notes and the output of our work.

Here is a little video from playing around with the first moment, which is about inflating and deflating balloons:



Here is the passage from the monologue:


We work in a big airplane hangar. This girl and I are budding experts in the technology.

We work with these machines. They are like 3D printers, but they print balloons that fit together. She’s been at it longer than I have so she explains how my next piece was printed backwards and that I’ll have to flip it inside out to make it work.

For a few minutes we giggle and play with the flip, in-vagina, out-penis, in-vagina, out-penis. One of us earns a badge. I pick it up. She says that was hers. I give it graciously though I honestly think it was mine. It doesn’t matter. We are having fun.


In This Dream - Developing an AR Performance Framework

Prototyping and Changing Directions

In the last two weeks, I have taken some strange turns in the development of my digital mirror for my Thesis Project. In order to make sure I deliver a finished product for the course, I have decided to focus on the aspects of Kinect development that I have mastered most at this point. Thankfully, I am re-using a lot of assets and code from my prototypes for my Thesis.

So what is this project?

This project is the first complete piece of a performance series called "In this Dream".

In these performances I will use the Microsoft Kinect to create augmented experiences for performers who are acting out dreams that I have had about gender dysphoria and transitioning. The augments respond to the performer's body positions and relate to major themes and symbols throughout each piece. The goal is to create a living performance environment for performers to play with while working.

For this first one, I will be both the performer and programmer.

Here is a link to the text I will be using.

So what's it look like?

The augments will principally affect the performer and change based on qualities of their body (see short demo video).


The principal augment is a skeleton that lies over the performer and reflects symbols that recur throughout the piece.

Here are some effects that I have been playing with. The black background is to make the augment easier to see; in the actual performance you will see (at times) both performer and augment.
 

 "The Basic" | Symbol - Myself



"Filling Space" | Symbol - Myself(working)


 "State as Head" | Symbol - Predicted State of Performer

 "State as Saturation"| Symbol - Predicted State of Performer



"Manipulators"| Symbol - Others making choices for me 

Here is a little view under the hood as well, showing how tracking works and which variables I am tracking:
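To give a sense of the kind of variables I mean, here is a toy sketch (not the actual project code) of pulling joint data and deriving a couple of values the augments could respond to. It assumes Processing with Thomas Lengeling's KinectPV2 library, and the variable names (handSpread, headHeight) are placeholders I made up for the example:

```
import KinectPV2.*;

KinectPV2 kinect;
float handSpread;   // distance between the hands (placeholder tracked variable)
float headHeight;   // head height above the spine base (placeholder tracked variable)

void setup() {
  size(1920, 1080);
  kinect = new KinectPV2(this);
  kinect.enableSkeletonColorMap(true);
  kinect.init();
}

void draw() {
  background(0);
  ArrayList<KSkeleton> skeletons = kinect.getSkeletonColorMap();
  for (KSkeleton skeleton : skeletons) {
    if (!skeleton.isTracked()) continue;
    KJoint[] joints = skeleton.getJoints();

    KJoint handL = joints[KinectPV2.JointType_HandLeft];
    KJoint handR = joints[KinectPV2.JointType_HandRight];
    KJoint head  = joints[KinectPV2.JointType_Head];
    KJoint base  = joints[KinectPV2.JointType_SpineBase];

    // Derived values the augments can read each frame
    handSpread = dist(handL.getX(), handL.getY(), handR.getX(), handR.getY());
    headHeight = base.getY() - head.getY();  // larger when standing tall

    // Debug readout of what the system "sees"
    fill(255);
    text("hand spread: " + nf(handSpread, 0, 1), 20, 40);
    text("head height: " + nf(headHeight, 0, 1), 20, 70);
  }
}
```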

So what? (What goes in my paper...)

I am creating the rules for these augments using a performance framework called Viewpoints, which defines key variables in a live performance using the acronym SSTEMS. Here are all of these elements, along with their possible correspondence to the library I am building (a rough sketch of the mapping follows the list).

  • Space (canvas and room data in Kinect)
  • Shape (body data in Kinect and relationship to Canvas data)
  • Time (system time)
  • Emotion (system guesses about state using posture and face data)
  • Movement (changes in body data)
  • Story (logic rules behind the system)
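In code, that mapping might eventually look something like the structural sketch below. The class and field names are placeholders of my own, not part of any existing library; the point is just that each SSTEMS element becomes a bundle of values the augments can read:

```
// Placeholder container for one frame of Viewpoints-style data.
// All names here are hypothetical; the real library may end up quite different.
class ViewpointsFrame {
  PVector[] joints;             // Shape: body data from the Kinect
  float roomWidth, roomDepth;   // Space: canvas and room dimensions
  long timestamp;               // Time: system time for this frame
  float guessedState;           // Emotion: crude guess from posture and face data
  PVector[] velocities;         // Movement: change in joint positions since the last frame
  String activeRule;            // Story: which logic rule is currently driving the augment

  ViewpointsFrame(int jointCount) {
    joints = new PVector[jointCount];
    velocities = new PVector[jointCount];
    timestamp = millis();
  }
}
```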

I hope that adapting this framework will help in creating these short pieces about dreams, and possibly with staging and rehearsing performances that use augments.

Future Planning

- make body position values relative, to avoid issues with depth (a rough sketch of the idea follows this list)
- fix the depth data and decide whether or not to use it
- program the symbol-value interactions
- work with a director and perform (December 11th)
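For the first item, "relative" body positions means expressing each joint with respect to a root joint (say, the spine base) and scaling by a body measurement (say, torso length), so the numbers stay comparable as the performer moves toward or away from the sensor. A minimal sketch of the idea, assuming the joint positions have already been pulled from the Kinect into PVectors:

```
// Express a joint relative to the spine base and normalize by torso length,
// so the same pose gives similar values regardless of distance to the sensor.
PVector relativeJoint(PVector joint, PVector spineBase, PVector spineShoulder) {
  float torsoLength = PVector.dist(spineBase, spineShoulder);
  if (torsoLength < 0.0001) return new PVector(0, 0, 0);  // avoid divide-by-zero
  PVector rel = PVector.sub(joint, spineBase);
  rel.div(torsoLength);
  return rel;
}
```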

Keep in mind (a little note to self)

- scope creep (keep your graphics simple and fun to play with)
- focus on the interactions more than the quality of the piece
- Don't build a narrative into the code; the performer uses the code to discover relationships, not to perform them. The performer should be using their regular tools and tricks.

 

Posture Feedback and Final Plans

This week I completed the posture feedback prototype for my project. I have made a short demo of myself using it and linked it below:





Currently it tells the user how aligned their head, mid-spine, and lower spine are, and whether or not their shoulders are level. A large red error bar grows on parts of the spine that are misaligned, to help users straighten their spine. The line between the user's left and right shoulders also changes from red to green depending on the angle it makes, guiding the user to line up their shoulders.
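Under the hood, the check is essentially comparing horizontal offsets along the spine and the angle of the shoulder line. Here is a stripped-down sketch of that logic in Processing; it assumes the joint positions are already available as screen-space PVectors, and the thresholds are made-up example values rather than the ones in my prototype:

```
// Draw spine-alignment error bars and a shoulder line that turns green when level.
// head, spineMid, spineBase, shoulderL, shoulderR are screen-space joint positions.
void drawPostureFeedback(PVector head, PVector spineMid, PVector spineBase,
                         PVector shoulderL, PVector shoulderR) {
  float alignTolerance = 15;  // pixels of horizontal drift allowed (example value)

  // Error bars grow with how far each point drifts from the spine base's x position.
  drawErrorBar(head, spineBase, alignTolerance);
  drawErrorBar(spineMid, spineBase, alignTolerance);

  // Shoulder line: red or green depending on how level the shoulders are.
  float shoulderAngle = degrees(atan2(shoulderR.y - shoulderL.y, shoulderR.x - shoulderL.x));
  boolean level = abs(shoulderAngle) < 5;  // within 5 degrees of horizontal (example value)
  stroke(level ? color(0, 255, 0) : color(255, 0, 0));
  strokeWeight(4);
  line(shoulderL.x, shoulderL.y, shoulderR.x, shoulderR.y);
}

void drawErrorBar(PVector joint, PVector spineBase, float tolerance) {
  float drift = abs(joint.x - spineBase.x);
  if (drift > tolerance) {
    noStroke();
    fill(255, 0, 0, 150);
    rect(joint.x, joint.y - 10, drift, 20);  // bar grows with misalignment
  }
}
```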

I also returned to Processing and to using the CLM tracker for facial tracking. As I understand more about how it works, it has become easier to work on the mask portion of my project. I am still working on letting a user alter their own face. I am attempting to figure out how to replace the user's face with an image of their face and then allow them to play with it.
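One direction I am testing is simply grabbing a snapshot of the pixels inside the tracked face region and drawing that image back over the live face so the user can play with it. Here is a hedged sketch of the idea in Processing, assuming a webcam feed; the face rectangle (faceX, faceY, faceW, faceH) is hard-coded here as a placeholder where the tracker's output would go:

```
import processing.video.*;

Capture cam;
PImage faceSnapshot;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);

  // Placeholder face rectangle; in practice this would come from the face tracker.
  int faceX = 220, faceY = 120, faceW = 200, faceH = 240;

  if (keyPressed && key == 's') {
    // Grab the current face region so the user can play with it afterwards.
    faceSnapshot = cam.get(faceX, faceY, faceW, faceH);
  }

  if (faceSnapshot != null) {
    // Re-draw the captured face over the live one; mouseX scales it as a toy control.
    float s = map(mouseX, 0, width, 0.5, 1.5);
    image(faceSnapshot, faceX, faceY, faceW * s, faceH * s);
  }
}
```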

Final Steps
- Finalize the mask portion: either make it the user's face or allow the user to customize their face
- Script and practice the "experience" that a user has [as I have not designed all of my UI/UX elements, I will have to guide participants through some elements]
- Get volunteers to try the experience
- Gather physical materials needed
    - portable Windows PC for the Kinect
    - curtain to make a small booth
    - monitor to display
- Make sure the experience can run without me there

These final steps are critical to completing my paper for this course, as they will help me explain how folks reacted to my experience. They are also critical to continuing to refine this project and beginning to evaluate its effectiveness.

 

Playable Prototype and Masks, Masks, Masks

 


I made some masks and a mid-fidelity prototype. I can now demo the mask making, vocal feedback, and posture feedback. I brought the mask making and vocal feedback demo to class.

I need to work on weaving the three experiences together, on the process of users making their own masks, and on the UI for all three components.

I also want to have some demo examples for finding a centered stance before the posture feedback portion. 

 

Playing with Graphics in Processing

As my project is going to be in three pieces, I decided to do more practice in Processing with face tracking and to find some key points.


 

Connecting Kinect and Uniting Unity

I spent time this week learning more about the Kinect SDK for Unity. I tried to learn more about some of the tools I would be using: replacing the background, rigging a 3D model (I haven't been able to do this yet), and controlling with gestures.

I am currently focusing on rigging my own 3D model, but have struggled a lot. You can see my fun and excitement as I play with these tools.





If anything, this motivated me to learn more about the underlying scripts and to try to build on the examples, making them more interesting. I also don't like working with something I didn't build from scratch.
 

Cheek to Cheek



Demo Features





Notes on Process

This week I have been working on making masks (eventually these will be full body "masks") using the Kinect. In order to build my technical skills working with the Kinect SDK, I played around in Processing with a library made by Thomas Lengeling. Thanks to Thomas' cool examples, I had a base for building out some fun stuff. I am happy to have learned a little about:
  1. Gesture-based controls
  2. Using data from the skeletal model to run a program
  3. Using data from the facial tracking system to modify the points generated
I still need to learn more about:
  1. Rigging a 3D model to the Kinect skeletal model
  2. Drawing a face on top of the points in the face array (I can't figure out what order these are in, or how to find things like the "top left corner")
  3. Using the depth mask to change backgrounds and focus on the human figure
I am considering switching to Unity+Kinect and basing the program on some example files from the Unity store. I am more confident, however, in my ability to do most of the modules (voice feedback and skeletal feedback) in Processing, as I am more comfortable doing the scripting there. Making all of these modules into libraries and classes makes a lot more sense to me than working in Unity.
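For anyone curious, here is roughly what the first two items of the "learned" list above look like in practice. This is a toy sketch in the spirit of the KinectPV2 examples rather than my actual code, and the "hands above the head" rule is just a made-up gesture for illustration:

```
import KinectPV2.*;

KinectPV2 kinect;
boolean handsUp = false;

void setup() {
  size(1920, 1080);
  kinect = new KinectPV2(this);
  kinect.enableSkeletonColorMap(true);
  kinect.init();
}

void draw() {
  // Gesture-based control: the toy gesture from the previous frame changes the background.
  background(handsUp ? 60 : 0);

  ArrayList<KSkeleton> skeletons = kinect.getSkeletonColorMap();
  for (KSkeleton skeleton : skeletons) {
    if (!skeleton.isTracked()) continue;
    KJoint[] joints = skeleton.getJoints();

    KJoint head  = joints[KinectPV2.JointType_Head];
    KJoint handL = joints[KinectPV2.JointType_HandLeft];
    KJoint handR = joints[KinectPV2.JointType_HandRight];

    // Skeletal data running the program: both hands above the head flips a flag.
    handsUp = handL.getY() < head.getY() && handR.getY() < head.getY();

    // The same data drawing something: circles on the tracked hands.
    fill(255);
    noStroke();
    ellipse(handL.getX(), handL.getY(), 30, 30);
    ellipse(handR.getX(), handR.getY(), 30, 30);
  }
}
```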


Clothing is Hard. Clothing is Really Hard.

So I have begun my journey into using the Microsoft Kinect as a solution for developing my thesis. I am currently evaluating other Kinect-based solutions that allow for experimentation with identity, and playing around with various libraries that might help me build my own. The open Processing libraries I have found seem the most accessible to me, but they do not afford as much access to 3D modeling as, say, linking the Kinect to Unity might. That exploration is for a later post.

This post is really about how difficult it is to model clothing on someone! I thought this might be a component of my solution, but I think I might have to focus only on the face. I checked out an app for the Kinect called Fitnect that charges a LOT of money for a virtual dressing room.

In short, it does not look good. It does not provide the immersion that I would expect/need in my application especially reflecting on the first time I wore the clothing I wanted to wear or recalling the stories that others have told me. Perhaps clothing is a component that will have to wait. Pictures pending of my foibles with Fitnect. 

Assignment 4: Digital Mask Work - The Modern Greek Theater


In my life as an acting instructor, I often turned to mask work to teach about body and character work. The mask lets the performer forget about moving their facial muscles. It reminds us that we exist in our bodies, not just our faces. It also gives us a new identity. It offers us a world in which we have a different face.



Master's Thesis Project - Mirror/Mirror




    When we stand in front of a mirror we often think that we are seeing our “true” self, but what we see is not always the truest version of things. Consider two visitors in the valley of half-purple, half-green sheep. One insists the valley is full of purple sheep, while the other, having only seen green sheep herself, calls him a madman and a cad. It would take a deeper investigation for these two to see the Truth of this situation. Both are correct, neither is right, and only through further exploration will the two ever understand what the other meant.
    That case is much simpler than the complexity of looking into a mirror and trying to see ourselves. In the most basic terms we likely see one of three things: a self we desire to be, a self that we perceive we actually are, and a self that we think others see. Like the two visitors to the valley, we are correct about all of these selves, but none of them is who we really are. The goal of any human is to find some alignment between these selves.
    This problem is compounded for someone who faces gender dysphoria: a marked mismatch between their self-concept of their gender (or sex) and the gender (or sex) that they were born with or that others perceive.
    In addressing this mismatch, trans* folks face many challenges; one principal challenge is resolving the mismatch and building the confidence to behave and react in different ways. My project is to create a digital mirror that allows trans* folks to alter their appearance and then play games that will build their confidence in speaking and walking. This will also be couched in a narrative experience to give the user a sense of wonder and appeal to the human love of story. Over the next week, I will be reviewing similar projects and posting my thoughts as I begin to design my Mirror. Keep on the lookout!


Assignment 3: Failures to Make Cthulhu Rise - A Reflection on Love



A while ago I had this image in my head while looking at the San Francisco Bay. I had just had a morning of whispering poetry, laughing at secret language jokes, listening to Dvorak, and the general set of things one might associate with a weekend with a partner. I was overwhelmed with love but also this other strange looming feeling. I hadn’t come out to her. I knew I would have to tell her. That the secret would end us. Standing on her porch I had this vision of a giant sea monster erupting from the Bay and turning our whole world upside down. I tried to express it to her, how it felt, how it would look, how tiny the houses dotted along the ridge would become. I even wrote this poem:


When I was out on your porch,


   And I said that I’d like to see a monster rise from the depths of the ocean,
   
I meant, that


Placidity,
the awareness that things could be jumbled,
conturbare - to shake up


and


Turbulence,
the self deception that motion
is not unmotion


are the same thing.


We build great theaters from two mirror brothers,
motion, the illusion of motion, and
the still body canvas on which we paint our imaginations.


What I mean is,


I wish that words would burst from me like they burst from the world,
I wish that sense and goodness would be born of me like it is born of the world,
I know that Love is that which moves and doesn’t move. I know that love is a monster in the depths of the ocean.


Love is a monster in the depths of the ocean,
is a spaceship full of promises and steel deep in the heart of space,
the imagination run wild and rampant on the promises
of the physical.


I don’t think she got it. I don’t think that I got it.


It was this urge to show someone else a world deep inside of myself. An urge to convince the rest of the world that the fantastic could exist alongside the mundane. Love didn’t mean what we shared, it meant what the world shared. The interplay between what is and what could be.


And so I was super excited about this project. I was going to finally put my sea monster in the Bay. Or at least the Hudson. I would send it to her and say THIS IS WHAT I WANTED TO SHOW YOU.


So I started in my room:


This was going to be harder than I thought.


I got him here:

A little better. And finally facing kinda towards me:


I struggled to place him ANYWHERE else, fiddling with transform and rotate in the POI tables. Nothing seemed to respond the way I wanted it to. I did iterations in Maya turning and twisting. I built the full scene and tried to export it as a single object. This did not work.


I became increasingly frustrated.


Here was the perfect medium for my vision of the fantastic and it was stuck in my little apartment room with my comic book cat and giraffe pillows.


At this point, I feel defeated. I failed to even get the beast into the MAGNET courtyard. I failed to get it to fully face me. For the life of me, I cannot figure out what the XYZ axes in the PHP tables are. They must refer to some invisible point that is not the center of my Cthulhu sea monster.


It is a mystery.

I hope that with some time and thought, though, I will solve this mystery. I hope that the augment won't disappear every time I move my phone. I hope that the Fantastic may be something I can create without falling to the madness of Lovecraft's god of insanity.



Assignment 2: False Dichotomies - Replacing Ads with Thoughts


For my project, I decided to make a little PowerPoint slide presentation that would pop up when you looked at the election ads from 7/11.




The augment is a little explanation of false dichotomies and some funny and serious examples that point out where false dichotomies fall apart.

I also made this little narrative video to display it because I got really attached to the idea of playing two characters. Enjoy!



Assignment 1 : Fact/Fiction - The Submarine Ride


The following is a reflective prose piece with images.

As you read this, listen to this audio.

Memories are constructed and reconstructed every time they are recalled. The reconstruction becomes more valid than the original until we are left wondering if there ever was an original. 

When I was very young, I went on a real submarine ride. It looked like this: 

I remember how red the lava was, flowing around the edges of rocks, the condensation through the windows, and the way everything seemed to glow incandescently.

Were the fish moving? Or were we moving? Was my mind moving?


I would bring this memory up with my mom a lot. I would tell her I remember my father holding my hand. I insisted I could recall his face on the real submarine ride, that time she and he took us under the sea to see the wonder of nature.

She would smile and nod... oh yes...of course...your father was there
_________

And after enough smiles and nods, I start to remember when the submarine surfaced.



And the great seizing hand of memory emerges from the waterfall nearby



Taking the sub in hand and starting to crush the metal around me



The hand can't be real, the fish. Can't be real. The lava. Not real. The sub...

A very good copy.
________

This is Disneyland by moonlight. This is fact/fiction. Now the whole view expands into my mind.

The Matterhorn and tramway nearby.



Cinderella's Castle in the distance.


If these parts were not fact, what else was fiction? What else had I made real that was not there? How would I construct this memory the next time it passed through my mind? How do I construct it now?
