For my personal project, I decided to do a PSA on global warming and the effect of people’s actions on the environment in general. The piece is a compilation of clips from around the world showing environments being destroyed, the effects of that destruction, audio of people speaking about the situation, and a moving soundtrack to tie it all together. Cutting back and forth between natural destruction and humans damaging the environment helps create a “cause and effect” connection.
Chosen theme: Surreality.
Focus: In various shots throughout the video, focus is pulled from one extreme to the other. In the screenshot above, the character is exiting a foggy state and becoming aware of his surroundings. To let the viewers feel the same, the focus is soft on both the character and his surroundings. This quickly changes as the camera pulls away.
Color Separation: For parts of the sequence in which the character is deeper into his trip, I used color separation to enhance the “new state” perspective. I could have used a fisheye lens to create a trippy effect, but changing the color palette of the world was a much stronger choice. This, plus the inclusion of rough edges, creates a new-world visual for the segment.
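The color separation effect described above was done in the editing software, but the underlying idea can be sketched in a few lines: offset one or two color channels spatially so the channels no longer line up. The function name and pixel format here are hypothetical, purely for illustration.

```python
# Illustrative sketch of a channel-offset "color separation" effect.
# The image is a list of rows; each pixel is an (r, g, b) tuple.

def separate_channels(image, shift=2):
    """Return a copy of `image` with the red channel sampled `shift`
    pixels to the right and the blue channel `shift` pixels to the left,
    clamping at the image borders."""
    height = len(image)
    width = len(image[0])
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            # Clamp the sampling positions so we never read off-image.
            rx = min(max(x + shift, 0), width - 1)
            bx = min(max(x - shift, 0), width - 1)
            r = image[y][rx][0]
            g = image[y][x][1]
            b = image[y][bx][2]
            row.append((r, g, b))
        out.append(row)
    return out
```

Larger shifts exaggerate the fringing; a shift of a few pixels already reads as “wrong” to the eye, which suits the trip sequence.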
Camera movement: In this shot, the camera pans left to right to transition the viewer into a new region. By measuring the distance from both walls while filming and keeping the camera level, I was able to create a seamless pan that appears to occur in one shot and setting, but was in fact done in two. The feathering strengthens the effect by blending the seam, reducing the chance of the viewer detecting a shake.
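The feathered join between the two takes boils down to a cross-fade over an overlap region: one shot’s weight ramps down as the other ramps up. A minimal sketch of that blend, using a single row of grayscale values to stand in for frames (the function and data shapes are illustrative, not the actual editing-software operation):

```python
# Illustrative sketch of feathering two overlapping shots into one pan.
# Each "shot" is a flat list of grayscale values; the last `overlap`
# samples of shot_a line up with the first `overlap` samples of shot_b.

def feather_blend(shot_a, shot_b, overlap):
    """Join shot_a and shot_b, cross-fading linearly over `overlap` samples."""
    head = shot_a[:-overlap]           # part of A before the seam
    tail = shot_b[overlap:]            # part of B after the seam
    blend = []
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)    # weight ramps from A toward B
        a = shot_a[len(shot_a) - overlap + i]
        b = shot_b[i]
        blend.append(a * (1 - w) + b * w)
    return head + blend + tail
```

The wider the overlap, the gentler the ramp, and the less likely a small mismatch between the takes will register as a jump.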
For my Juice advertisement, I wanted to keep every single aspect of the design simple. From the final layout to the environment and even the can’s label, I did not want anything to be too flashy. I started out by creating a CV curve to shape the can to my liking. Once I had the CV curve converted to polygons, I set up two materials on the can geometry: one for the label and the other for the exposed aluminum on the bottom and top. The label design went through 16 different iterations. The first 5 were more complex in design but made the can look flat and lacking depth. I then found a way to create depth against the solid color background by using shadows with the text and by setting up an HDRI light environment in the render. After coming up with the slogan, I thought a bit about how I wanted to convey it visually, later coming up with the vine design. After tweaking the locations of the vines and finding a good resting place for the end, I modeled the oranges resting next to the can as well as the small ones on the vine. Yes, oranges grow on trees, but please just go with it. In the end I set my sample count to a high number, resulting in a 14 minute render time, and then exported the EXR file. After some tweaking in Photoshop, this is the result I ended up with.
For this project we were instructed to composite a 3D render and an image photographed by ourselves (no downloading images online) into a single image. For the first two weeks I worked on modelling, texturing and lighting my scene so I could have a better idea of how to “interact” with the environment when shooting the still images in a green screen studio. Once the 3D scene was set, I started making a few EXR renders to make sure I had all the AOVs needed to control the assets of the render thoroughly enough that I would not have to keep going back and re-rendering. I then went to the green screen studio and shot some poses using Automatic Exposure Bracketing so I could make an HDR image of myself. This ensured I had enough light data for the shot, so I could increase or decrease light without overexposing regions or lacking color information altogether. Over the next three days I repositioned the 3D camera to match the angle, position and focal length of the camera I used for the green screen shot. Doing so was crucial to ensure the perspectives matched, and it helped make the image appear as one whole composition that wasn’t poorly photoshopped by someone who downloaded a cracked version of Photoshop to make memes. On day three, after getting some input from my instructor, I was able to get a decent amount of detail on myself while still matching the lighting conditions of the 3D render. In the end I had a final composition made from two shots that were modified to fit each other’s moods, color tone, perspective and overall feeling.
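The exposure-bracketing step works because each bracket gives an independent estimate of scene radiance (pixel value divided by exposure time), and the estimates can be averaged with more weight on well-exposed samples. A minimal per-pixel sketch of that idea, with a hypothetical hat-shaped weighting (the real merge was done in compositing software, not this code):

```python
# Illustrative sketch of merging bracketed exposures into one HDR value.
# Each exposure contributes an estimate of scene radiance (pixel / time),
# weighted to favor mid-tones over clipped shadows and highlights.

def merge_brackets(pixels, exposure_times):
    """Estimate radiance for one pixel sampled at several exposures.

    pixels: 8-bit values in [0, 255]; exposure_times: seconds, same order.
    """
    num = 0.0
    den = 0.0
    for p, t in zip(pixels, exposure_times):
        # Hat weight: 0 at pure black/white, 1 at mid-gray.
        w = min(p, 255 - p) / 127.5
        num += w * (p / t)
        den += w
    return num / den if den > 0 else 0.0
```

A fully clipped sample (0 or 255) gets zero weight, which is exactly why bracketing matters: some exposure in the set always holds usable data for each region.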
The input scheme for this project went through three different iterations. At first I intended to create the software to interact with an Arduino board, but was then asked if I could do it in a VR system. After developing for VR for about a week I was asked to return to Arduino, finally settling on the Adafruit Circuit Playground board. I used the two buttons on the Playground to simulate key presses (the left button was assigned the “a” key and the right button the “d” key). Pressing the “a” button would emit a single glowing orb and send it in a random direction from the emitter’s origin point. Holding down the “a” button would cause the orbs to spawn at a rapid rate, each going a different direction. The “d” button was saved for the solo in the middle of the performance, when the second guitarist (Mark Tremonti) takes over for the second half; the first half being played by Myles Kennedy. The slide switch was programmed to enable and disable mouse cursor movement. This allowed me to disable mouse input through the accelerometer when the rotation speed was at a desirable rate, as well as the flashing of the audio spectrum. When I was ready to accept mouse input to change rotation speeds or flash the audio spectrum some more, I flipped the switch to the other side and tilted the board in whatever way needed, changing the visuals on the screen. In the end the project required seven prefabs: four different colored orbs, one glowing grid that changes size dynamically and the audio spectrum blocks. Five scripts were written from scratch, each responsible for a specific function to help keep the implementation clean and avoid wasting resources (grabbing instances from the scene hierarchy, object pooling, etc).
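The object pooling mentioned above is what keeps rapid-fire orb spawning cheap: rather than creating and destroying an orb per button press, a fixed set is activated and recycled. A minimal sketch of that pattern (class and field names are hypothetical; the actual project used Unity prefabs and C# scripts):

```python
# Illustrative sketch of object pooling for the glowing orbs: a fixed-size
# pool is pre-allocated, and spawning reuses inactive orbs instead of
# allocating new ones. If the pool runs dry, the oldest active orb is
# recycled.

class OrbPool:
    def __init__(self, size):
        self.free = [{"id": i, "active": False} for i in range(size)]
        self.active = []

    def spawn(self):
        """Activate a pooled orb, recycling the oldest if none are free."""
        orb = self.free.pop() if self.free else self.active.pop(0)
        orb["active"] = True
        self.active.append(orb)
        return orb

    def despawn(self, orb):
        """Return an orb to the free list when it leaves the scene."""
        orb["active"] = False
        self.active.remove(orb)
        self.free.append(orb)
```

Holding the button down then just calls `spawn()` every frame: memory use stays flat no matter how long the button is held.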
These scripts included: a camera spin control to manage the camera spin speed as well as the spectrum flash value; an audio visualizer to scale the spectrum pieces on the Y-axis to fit the amplitude of the various frequencies; a script to create the blocks and place them in the correct areas on scene start (the number of blocks depended on the bit rate of the audio); a glow grid script to control the dynamic growth of the grid that is enabled during the second half of the solo; and finally a beat action script that would choose a random glowing orb from an array of orbs, choose a random transform to aim the emitter at, and shoot the orb in the direction of that transform.
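The audio visualizer’s core job, mapping per-band amplitudes to block Y-scales, can be sketched simply. The floor value and smoothing factor here are hypothetical choices, not the project’s actual numbers; the real script ran in Unity per frame:

```python
# Illustrative sketch of the audio visualizer's scaling step: each
# spectrum block's Y-scale follows its frequency band's amplitude, with a
# minimum height so silent bands stay visible and exponential smoothing
# so the bars don't jitter frame to frame.

def scale_blocks(amplitudes, prev_scales, max_height=10.0, smooth=0.5):
    """Return new Y-scales for each block from this frame's amplitudes
    (values in [0, 1]) and the previous frame's scales."""
    scales = []
    for amp, prev in zip(amplitudes, prev_scales):
        target = max(0.1, amp * max_height)        # floor of 0.1 units
        # Move partway from the previous scale toward the target.
        scales.append(prev + (target - prev) * smooth)
    return scales
```

A `smooth` near 1 makes the bars snappy; closer to 0 makes them glide, which reads better for sustained guitar notes than raw per-frame values.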
For game jam 2018, I was one of the three artists on the team. Some of my other work involved programming and asset management, but my main task was to create the models in the game. Since we were going for low poly and flat shaders, the models didn’t need much fine detail; what mattered was capturing the overall form of each object. One of the first assets I was in charge of was the homeless man sitting on a mangled box. Although the model would be viewed from afar, I decided to take the time to model, sculpt and texture some detail into the face. This would also come in handy if the programmers decided to create an inventory system in which the model would be viewed up close. After finishing the homeless man, my next goal was modelling a shoe. This did not take long, as the basic shape was a rectangular prism with an extrusion on one end and beveled edges. After about 20 minutes, I had a completed shoe model for the programmers to throw into the game. One of my last models was a tide pod. I had some difficulty modeling this item at first: the topology of one swirl didn’t match and bridge over to the next. I then decided to start from scratch with a flattened cube, subdividing it like crazy and sculpting the detail in. Once the detail was added, I reduced the poly count to match the low poly look of the other assets. Once all the models were completed, I cleaned up some of the UV maps, since they all shared the same texture and shader (we used a texture atlas for all assets in the game). One last-minute piece I put together was a main menu for the game; beforehand the application loaded straight into the game with no credits or introduction. I was in charge of animating the models and camera, adding post-processing effects, programming the UI functions and designing the credits scene transition. This menu can be seen in the video shared below: