
Controlling a 360 Environment with Node.js + Socket.io + Three.js by Sebastian Morales

Our Sense Me Move Me final project is a multifaceted performance. For part of it we will be projecting a 360 environment on the walls, ceiling and floor of the room. Perhaps inspired by VR, maybe as a critique of it, or in an effort to make it more inclusive, we are going to use a single projector on wheels. As we turn or tilt it, the projection will react to reveal the proper side of the virtual world.

Using the sensors inside an iPhone you can accurately identify the orientation of the phone. If only there was a way to send all these numbers live to my laptop... Interesting fact: a couple of years back, laptops (MacBook Pros) used to have a similar feature to protect/lock the hard drive in case the computer found itself falling; as hard drives were replaced with SSDs, this feature faded away.
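On the phone side the orientation is already available in the mobile browser, so no native app is needed. Here is a minimal sketch of that part, assuming the page is served by the same server and already loads the socket.io client (newer iOS versions also require asking for motion sensor permission first):

// Runs in the phone's browser.
const socket = io(); // connect back to the server that served this page

window.addEventListener('deviceorientation', (event) => {
  // alpha: rotation around the vertical axis (0 to 360)
  // beta:  front-to-back tilt (-180 to 180)
  // gamma: left-to-right tilt (-90 to 90)
  socket.emit('orientation', {
    alpha: event.alpha,
    beta: event.beta,
    gamma: event.gamma
  });
});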

Connecting phone to laptop
Before I continue, I wanted to thank Or Fleisher for helping me set up the server properly.

Now that I look back at it, it all seems quite straightforward, but at the time it seemed daunting.

 

 

 

The entire code is also available on GitHub.

I'm not sure how useful this will be to others, but I'll likely use it as a reference in the future. I first started by creating an npm package.json file and installing all the packages needed.
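For reference, this is roughly what the relay server could look like, assuming express and socket.io are listed as dependencies in package.json (the file names served from public/ are placeholders, not the actual project files):

// server.js
const express = require('express');
const http = require('http');
const socketIO = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = socketIO(server);

app.use(express.static('public')); // e.g. phone.html and world.html

io.on('connection', (socket) => {
  // forward orientation data from the phone to every other client (the laptop)
  socket.on('orientation', (data) => {
    socket.broadcast.emit('orientation', data);
  });
});

server.listen(8080, () => console.log('listening on port 8080'));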

 

After setting up all the pages and the server, you can now seamlessly control the view of a 360 world by tilting and rotating your smartphone.
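On the laptop side, the 360 image is mapped onto the inside of a sphere and the camera rotates with the incoming orientation data. A rough sketch of that page, assuming three.js and the socket.io client are loaded and the panorama lives at 'pano.jpg' (a placeholder name; the exact mapping from alpha/beta/gamma to camera rotation also depends on how the phone is mounted on the projector):

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// A sphere with flipped normals so the texture is visible from the inside.
const geometry = new THREE.SphereGeometry(500, 60, 40);
geometry.scale(-1, 1, 1);
const material = new THREE.MeshBasicMaterial({ map: new THREE.TextureLoader().load('pano.jpg') });
scene.add(new THREE.Mesh(geometry, material));

// Rotate the camera whenever the phone reports a new orientation.
const degToRad = (deg) => deg * Math.PI / 180;
const socket = io();
socket.on('orientation', (data) => {
  camera.rotation.order = 'YXZ';
  camera.rotation.y = degToRad(data.alpha);      // pan
  camera.rotation.x = degToRad(data.beta - 90);  // tilt
});

(function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
})();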

Finally, since we were using a projector and wanted to give the effect of shining a flashlight into a world, we added an alpha image of a spotlight; this hides the edges of the projector.
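The mask itself can simply be an image layered on top of the canvas. A small sketch, assuming a file called spotlight-alpha.png (a placeholder name) that is opaque black at the edges and transparent toward the center:

const mask = document.createElement('img');
mask.src = 'spotlight-alpha.png';
Object.assign(mask.style, {
  position: 'fixed',
  top: '0',
  left: '0',
  width: '100vw',
  height: '100vh',
  pointerEvents: 'none' // let events pass through to the canvas underneath
});
document.body.appendChild(mask);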

The 360 image is actually a composite of two images quickly merged together to create a more dramatic and surreal environment.

https://www.foro3d.com/f111/background-360-grados-en-cycles-115111.html

http://www.themodernnomad.com/sossusvlei/

SMMM Kinect Alternative Skeletons by Sebastian Morales

Forward Kinematics

For the following exercise I wanted to experiment with the idea of linking every joint of the body to another, in a linear fashion.

Graphics by Ron Rockwell

The idea is perhaps inspired by the concept of industrial robot arms, where the position of the end effector is a combination of all the previous joints, the first joint having the most effect on the final position and orientation.

Forward kinematics calculations consist of finding the end effector position and orientation from given joint parameters. We are most interested, however, in finding the joint parameters based on a desired end effector position and orientation. This is known as inverse kinematics (IK).
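As a quick illustration (not the project code), here is forward kinematics for a simple two-link planar arm: each joint angle is measured relative to the previous link, so the end effector position accumulates all of them.

function forwardKinematics(theta1, theta2, len1, len2) {
  // position of the elbow, driven only by the first joint
  const x1 = len1 * Math.cos(theta1);
  const y1 = len1 * Math.sin(theta1);
  // the second link starts where the first one ends, and its absolute
  // angle is the sum of both joint angles
  const x2 = x1 + len2 * Math.cos(theta1 + theta2);
  const y2 = y1 + len2 * Math.sin(theta1 + theta2);
  return { elbow: { x: x1, y: y1 }, endEffector: { x: x2, y: y2 } };
}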

 

Another interesting video showing a similar kinematics concept is the X125, in particular the series 2.

Kinectron + p5.js

The forward kinematics for this sketch are quite simple. Based on Mimi's code, I simply wrote another function and passed all the joint values in my desired order (arms first, legs second, spine third and head last) to achieve the widest range of movement.

To create the single line of joints, I only had to push and pop the matrix once for the entire set of joints.

I realized that the order of joints is not as relevant in the program above; this is because I am only translating position, not adding rotation depending on joint orientation. (Full code)
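The gist of it looks something like this (a rough sketch, not the actual project code; it assumes the Kinectron joints have already been reordered and scaled to the canvas):

function drawJointChain(joints) {
  push(); // one push for the entire chain
  for (const joint of joints) {
    // each translate is cumulative, so every joint drags along
    // the motion of all the joints that came before it
    translate(joint.x, joint.y);
    ellipse(0, 0, 10, 10);
  }
  pop(); // one pop at the end of the chain
}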

Study of Pathways Post-mortem by Sebastian Morales

It is that time of the project that rarely ever comes. Time to be critical of what worked, what didn't, and what surprised us. All in the hopes that next time will be much better. 

What pathways did you see?
The pathways observed can probably be divided into two main categories. There was a lot of back and forth motion, a lot of linear movement. This was particularly true of David as he moved around the room. Jade, however, tended to move more around the same area, orbiting in what could be considered circles or eights.

Which ones did you predict and design for? Which were surprises?
Thinking back, we predicted a lot more circular motion. But more importantly, we predicted a lot more collaboration among the users. We expected physical contact between them; in the end, they never even touched once. We predicted a lot more pushing and pulling, perhaps some rolling on the ground, and a lot of expanding and contracting, both in a personal and in a collaborative way.

 

What design choices did you make to influence the pathways people would take?
It is hard to say if there was one decision that influenced the pathways more than the rest, but there were a couple that had a lot of weight. Moving the Kinect from the ceiling to the wall in front of the performers had an immediate effect on how they would move; it literally shifted gravity and the range of possible movements. In retrospect, and perhaps not really a conscious design choice, showing the performers on the screen in front of them really affected the way they moved. They seemed to be more interested in how the technology was capturing the movement than in the movement itself.

Thinking about design choices, it is relevant to talk about the code, even if it did not turn out as expected. The idea was to make a polygon by joining different body joints of the two performers. By showing previous polygons, the performers could see the history of their movement. This is important because it makes them aware of how their motion is not limited to space but extends through time. The visuals are a consequence of the movement, but in turn they inform future possible motion.
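A rough p5.js sketch of that polygon idea (assuming each frame we already have the selected joints of both performers as a flat list of {x, y} points mapped to the canvas; the trail length and fading are choices made here purely for illustration):

let history = []; // previous polygons, oldest first

function drawPolygons(selectedJoints) {
  history.push(selectedJoints);
  if (history.length > 30) history.shift(); // keep a short trail

  noFill();
  for (let i = 0; i < history.length; i++) {
    stroke(255, map(i, 0, history.length, 20, 255)); // older polygons fade out
    beginShape();
    for (const joint of history[i]) {
      vertex(joint.x, joint.y);
    }
    endShape(CLOSE);
  }
}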

What choices were not made? Left to chance?
We only designed interactions for one or two people, so a third person's joints would not be shown on the display. The joints selected to form lines were only the left shoulder, left wrist, left hip and left foot, since we thought people might move these joints a lot. However, when the users started, they waved their hands and walked around to discover the space, with little focus on the shapes they formed.

What did people feel interacting with your piece? How big was the difference between what you intended and what actually happened? -Jade

We intended to project the screen onto the wall facing the users, but due to the equipment locations, we could only project it on the floor. Because of this, they first expected to see visuals on the floor, but it seemed hard to understand the connections between user behavior and the projection because the projected visuals were reversed. We didn't expect people to pay attention to the floor; instead, we hoped they would watch the visual changes on the two computers. This may have affected how long it took people to understand the interactions.

After we suggested they look at the computers, people could soon get the idea. But one of our programs, with floating curves, could only catch one user's joints and thus couldn't show an enclosed shape, while the other one showed a changing hexagon. We also intended that people hold their hands together and touch each other's feet, but people tended to stay away from each other, and the shapes they formed became much wider.

Provide BEFORE and AFTER diagrams of your piece:

Before:

Performers on the floor, connected by foot-hand action

Performers on the ground, connected by hand-hand and foot-foot actions

After:

Performers detached, walking and moving in very independent ways.

Alternative motions considered:

Code:

https://alpha.editor.p5js.org/sebmorales/sketches/rypE_wAdl

https://alpha.editor.p5js.org/Jade/sketches/BkfE2U1Yx

 

Important Acknowledgments:
Professor Mimi Yin  
Tiriree Kananuruk for the documentation
Lisa Jamhoury for the development of Kinectron
Class of Sense Me Move Me