For my phenakistoscope project, I started by making a basic layout in Photoshop. This included the proper dimensions of the circle, the locations and angles of various objects such as the viewing slots, and basic color. I then imported the image into Illustrator and traced the edges of every object to create a vector outline that the laser cutter could understand. Using color mapping on the cutter, I was able to cut completely through the material for the outer edge and viewing slots. For items such as the eyes, eyebrows, ground, and ball, I assigned a different color mapped to a lower power level so that it would only burn the top layer. I then took the final cut and traced over all the etched parts with a fine Sharpie pen. Next, I took a thicker Sharpie and filled in the irises of the eyes and the eyebrows. I then took watercolors and filled the eye with red and the ball with blue. For the eyes, I mixed water into the color on every other eye to give it a smoother texture. When viewed at a certain speed, this gives the eyes another “animated” aspect. I then scanned the phenakistoscope into my computer and animated it into the GIF shown above.
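The slot spacing on the disc follows directly from the frame count: with 12 frames, each viewing slot sits 30° from the next. Here is a minimal Python sketch of that layout math, assuming the 12 frames mentioned for this disc; the radius is just an illustrative number, not the one from my actual layout:

```python
import math

def slot_positions(num_slots=12, radius=100.0):
    """Return (angle_deg, x, y) for each viewing slot, evenly spaced
    around the disc's center. num_slots=12 matches the disc described
    here; radius=100.0 is purely illustrative."""
    positions = []
    for i in range(num_slots):
        angle = 360.0 / num_slots * i  # 30 degrees apart for 12 slots
        rad = math.radians(angle)
        positions.append((angle,
                          radius * math.cos(rad),
                          radius * math.sin(rad)))
    return positions

for angle, x, y in slot_positions():
    print(f"slot at {angle:5.1f} deg: ({x:7.2f}, {y:7.2f})")
```

The same coordinates work for laying out the 12 animation frames, just offset by half a slot so each drawing sits between two slots.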
The principles of animation I used for this piece were squash and stretch (the ball bouncing on the ground), anticipation (the ball squashing on the ground, ready to spring up again), and timing (fitting the bouncing action of the ball into the 12 “frames” on the disc while maintaining a realistic time/space look).
Overall I am very happy with the way my phenakistoscope turned out. I had thought about adding some various background colors but decided to go without. I would definitely love to create another one of these, but on my own time so I can be more flexible with the deadline and get all of the details I want.
For this piece I started by acquiring a high-res image of the USA from a satellite's point of view off of Google Images. The original image was much larger than I needed (over 3000px wide), so I scaled it down to about 1000px wide in Photoshop. I then exported the image as an uncompressed TIFF so its raw data could be edited in Hex Fiend. In Hex Fiend I opened the image and saved a copy to a separate folder for editing, in case I messed up the header and overwrote the original. I opened a preview of the file in a separate window on an external monitor so that as I made changes I could save the file and see what they had done (this also let me see if I had messed up the headers). When I was happy with the results of the first image, I imported that edit into Audacity as raw data. I applied the Invert, Echo, and Change Pitch effects to various portions of the raw data and exported it yet again as a TIFF. For the third image, I imported the raw file from Audacity and once again edited the hex information in Hex Fiend. I imported all of these images into the original Photoshop document I had created and began messing with the frame durations and layer selections. When I was happy with what I had, I added some tween transitions for a fade effect to give a bit more “life” to the final product. I then exported the GIF and am now uploading it to my WordPress for all to view. The 575 rule is implemented in the project by having a black frame appear on the fifth frame, seven frames after that (the twelfth), and then five frames after that (the seventeenth).
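The Hex Fiend edits amount to altering bytes past the file header while leaving the header itself alone, which is exactly the precaution described above. Here is a minimal Python sketch of that idea; the 1024-byte "keep intact" margin is an illustrative safety buffer, not the real length of a TIFF header, and the random XOR edits are only a stand-in for the hand edits I actually made:

```python
import random

def glitch_bytes(data: bytes, header_size: int = 1024,
                 num_edits: int = 50, seed: int = 0) -> bytes:
    """Randomly XOR bytes after `header_size` with a nonzero value,
    leaving the start of the file intact so it still opens as an
    image. header_size=1024 is an illustrative margin, not a real
    TIFF header length."""
    rng = random.Random(seed)
    out = bytearray(data)
    body = range(header_size, len(out))
    for _ in range(num_edits):
        out[rng.choice(body)] ^= rng.randrange(1, 256)
    return bytes(out)

# Usage: always corrupt a copy, never the original (same precaution
# as saving a duplicate before editing in Hex Fiend).
# glitched = glitch_bytes(open("scan_copy.tiff", "rb").read())
# open("scan_glitched.tiff", "wb").write(glitched)
```

An uncompressed TIFF works well for this because every byte past the header maps fairly directly to pixel data, so edits show up as visible streaks rather than breaking the whole file.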
Added shading closer to the edges of the chair to make it appear less flat and have a sense of belonging in the scene
When designing my chair I wanted to go for an unsettling feeling. To get this feeling on paper, I placed a padlock under my canvas before drawing. This way, when I drew the chair straight up and down on the crooked canvas, the chair would be slightly tilted to the side once the canvas was placed elsewhere, which can create an unsettling feeling in the viewer. For elements and principles of design, I used a great deal of value to make the shading look more realistic than flat. The use of various values of black and gray creates a sense of depth in the cloth on the chair, rather than leaving it looking flat. Another element I used for the majority of this project is line. The entire image besides the shading of the darker stripes uses lines, including the edges of the stripes, the outline of the chair, and some of the folds and wrinkles in the cloth. For atmospheric perspective, I slightly erased everything in the top region of the chair. This gives the chair an even further sense of depth, with the lines and the chair's backrest appearing to disappear into the not-so-far distance.
Brian Murphy’s “The Ability to Name Things Has Escaped Me” gallery opened on the 27th of October in the Llewellyn Gallery. It opened later in the evening on Thursday and drew quite a crowd. After being open for only a few minutes, the gallery was absolutely packed, with people bumping into each other trying to get a look at the incredible 3D artwork presented around the room. Before you knew it, the room was tightly packed with people walking around in anaglyph 3D glasses, their initial reactions to the work visible on their faces.
The topic of the exhibit was based around experimenting with expressing various viewpoints about the world through 3D video methods; in this case, anaglyph techniques. The exhibit was presented by the artist himself, Brian Murphy. One relevant piece of information he gave is that the pieces were 100% done by him: the filming, processing, application of effects, and final output. This is important because it not only gives credit where credit is due for the video itself, but also lets us know what Brian's filming style is, which could come in handy for any projects he releases in the future. Another important point, raised by a question I asked, was why he had chosen anaglyph 3D video techniques. Brian brought up that he chose anaglyph 3D not only because it was something he hadn't really worked with, but also because he was limited in the hardware available to him for other 3D video methods. This makes you wonder what amazing work he could have created if he had been given access to different 3D video methods.
The presentation was purely about experimentation. Brian did not bring up a life lesson or any symbolism in his work; he did these projects to practice different methods. It started with him getting a camera and wondering, “What could I do with these stills?” He then proceeded to experiment with various 3D methods and found anaglyph the most intriguing, as well as the most practical given his hardware limitations. One piece of info he brought up that caught my attention was that he was able to mix his actual DNA into a piece he had on display.
One way I can apply the content of the event to my work in DMA and this course specifically is by experimenting with anaglyph 3D, but rather than just converting video to 3D, applying some glitch art techniques to it as well. Doing so could produce some incredible results. Highly disorienting? Yes, perhaps. But in the end it would be interesting to experiment. If I were able to get my hands on the device used for one of the pieces (it caused randomized fluctuations in the video signal), I could try to apply the classic anaglyph 3D visuals to the video feed and see what type of results it gave. A really interesting experiment mixing DMA and this course would be digitizing my DNA like Brian did, then messing with the code output and running it through a program to generate visuals based on the information.
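The anaglyph part of that experiment is simple enough to sketch: a classic red/cyan anaglyph takes the red channel from the left-eye image and the green and blue channels from the right-eye image. A minimal pure-Python sketch of that merge, where each image is just a list of (r, g, b) tuples and the sample pixel values are made up for illustration (real footage would need this applied per frame in a video tool):

```python
def anaglyph(left, right):
    """Combine left/right-eye images into a red/cyan anaglyph.
    Each image is a list of (r, g, b) pixel tuples of equal length:
    red comes from the left eye, green and blue from the right."""
    return [(l[0], r[1], r[2]) for l, r in zip(left, right)]

# Tiny two-pixel example with made-up values.
left = [(200, 10, 10), (180, 0, 0)]
right = [(10, 150, 150), (0, 120, 200)]
print(anaglyph(left, right))  # -> [(200, 150, 150), (180, 120, 200)]
```

Glitching the left and right feeds independently before this merge would make the corruption itself pop in and out of depth, which is the disorienting effect I have in mind.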
By using high-contrast points on features such as the eyes and mouth, I am able to create a relatively intense psychological presence in the image. The stretching of the mouth and widening of the eyes add to this dark effect. We see faces every day, of people we do and do not know; we know very well what they look like. By disrupting the features of what is, for us, “the norm” of facial structure, an uneasy sense is created in the viewer.