3D Composite


For this project we were instructed to composite a 3D render and an image we photographed ourselves (no downloading images online) into a single image. For the first two weeks I worked on modelling, texturing and lighting my scene so I could have a better idea of how to “interact” with the environment when shooting the still images in a green screen studio. Once the 3D scene was set, I made a few EXR test renders to make sure I had all the AOVs needed to control the assets of the render thoroughly enough that I would not have to keep going back and re-rendering. I then went to the green screen studio and shot some poses using Automatic Exposure Bracketing so I could make an HDR image of myself. This ensured I had enough light data in the shot to increase or decrease the lighting without overexposing regions or losing color information altogether. Over the next three days I repositioned the 3D camera to match the angle, position and focal length of the camera I used on the green screen. Doing so was crucial to making the perspectives match, and helped the image appear as one whole composition rather than something poorly photoshopped by someone who downloaded a cracked version of Photoshop to make memes. On day three, after getting some input from my instructor, I was able to retain a decent amount of detail on myself while still matching the lighting conditions of the 3D render. In the end I had a final composition made from two shots that were modified to fit each other's mood, color tone, perspective and overall feeling.


Visual Instrument – Documentation



The input for this project went through three different iterations. At first I intended to create the software to interact with an Arduino board, but was then asked if I could do it in a VR system. After developing for VR for about a week I was asked to go back to Arduino, finally settling on the Adafruit Playground Arduino board. I used the two buttons on the Playground to simulate key presses (the left button was assigned the “a” key and the right button the “d” key). Pressing “a” would emit a single glowing orb and send it in a random direction from the emitter's origin point. Holding down the “a” button would cause the orbs to spawn at a rapid rate, each going a different direction. The “d” button was saved for the solo in the middle of the performance, when the second guitarist (Mark Tremonti) takes over for the second half; the first half being played by Myles Kennedy. The slide switch was programmed to enable and disable mouse cursor movement. This let me disable mouse input from the accelerometer once the rotation speed and the flashing of the audio spectrum were at a desirable rate. When I was ready to accept mouse input again to change rotation speeds or flash the audio spectrum some more, I flipped the switch to the other side and tilted the board as needed, changing the visuals on screen. In the end the project required seven prefabs: four different colored orbs, one glowing grid that changes size dynamically and the audio spectrum blocks. Five scripts were written from scratch, each responsible for a specific function, to help keep the implementation clean and avoid wasting resources (grabbing instances from the scene hierarchy, object pooling, etc.).
These scripts included: a camera spin control to manage the camera spin speed as well as the spectrum flash value; an audio visualizer to scale the spectrum pieces on the Y-axis to match the amplitude of the various frequencies; a script to create the blocks and place them in the correct areas on scene start (the number of blocks depended on the bit rate of the audio); a glow grid script to control the dynamic growth of the grid enabled during the second half of the solo; and finally a beat action script that would choose a random glowing orb from an array of orbs, choose a random transform to aim the emitter at, and shoot the orb in the direction of that transform.
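The beat action logic above can be sketched as a small logic model. This is not the actual Unity script; the orb names, target positions and the `BeatAction`/`fire` names are hypothetical, and Unity types are replaced with a plain struct so the idea stands alone: pick a random orb, pick a random target, and compute the normalized direction to fire it in.

```cpp
#include <array>
#include <cmath>
#include <cstdlib>
#include <string>

// Simplified stand-in for a Unity transform's position.
struct Vec3 { float x, y, z; };

// Hypothetical model of the "beat action" script: choose a random orb
// prefab and a random target transform, then return the orb along with
// the normalized direction the emitter should shoot it in.
struct BeatAction {
    std::array<std::string, 4> orbs{"RedOrb", "GreenOrb", "BlueOrb", "YellowOrb"};
    std::array<Vec3, 3> targets{{{5, 0, 0}, {0, 5, 0}, {0, 0, 5}}};
    Vec3 emitter{0, 0, 0};

    // Picks a random orb and target; writes the unit fire direction to 'dir'.
    std::string fire(Vec3& dir) {
        const std::string& orb = orbs[std::rand() % orbs.size()];
        const Vec3& t = targets[std::rand() % targets.size()];
        float dx = t.x - emitter.x, dy = t.y - emitter.y, dz = t.z - emitter.z;
        float len = std::sqrt(dx * dx + dy * dy + dz * dz);
        dir = {dx / len, dy / len, dz / len};
        return orb;
    }
};
```

In the actual project the orbs would come from an object pool rather than being constructed per shot, matching the resource-saving approach mentioned above.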



Insect Interactive



One of the first things my partner and I discussed once we had an application idea in mind was what we wanted the controller for the exhibit to look like. Since we are going for a bathroom floor look in this top-down application, we figured it would be best to “connect” the controller to the digital world by having the bathroom floor texture in the application match the bathroom floor texture on the controller. This allows the user to form a better connection with the application and bridges the gap between what is real (the controller) and what is fake (the application's visuals). Since cockroaches are known for causing large-scale infestations, we decided to make family one of the main recurring themes in the application, anywhere from the home screen to the pause menu and of course the application itself. At the home screen, the user will be greeted by a line of historical family portraits of the cockroach's family. The main menu will be extremely basic and allows the player to continue to the application with the press of any of the buttons present. As for the main application, the player will be introduced to three cockroaches, each with their own demonstrations. Two will be full-grown adults and one will be a baby cockroach (to illustrate the fact that babies can be as small as a speck of dust).


If we have extra time during development, we plan on adding a pause screen, which will feature a cockroach moving around to prevent screen burn while the application is in this state. At this point the player can cycle through the two options using the two buttons on the panel and select their desired option with the center button. The image on the right shows a sketch of the loading screen for the application. Again, this is a small application that shouldn't take long to load, but if we have time left in production it would be a nice touch. As the application loads in the background, the cockroach on the floor makes his way across the hall; once he reaches the hole back to his home, the main level loads.



For the final beta I was in charge of programming all of the cockroach AI, setting up the audio clips to play on specific button presses, polishing animations, setting up sprites, refining the floor texture, recording the facts audio, designing the floor that the user interacts with, programming the Arduino board to allow for interaction and, last but not least, testing the application after each change to ensure everything worked without bugs or compilation errors.

The cockroach AI consists of a series of if statements within a single script applied to all of the cockroaches. The system is designed to detect a certain button press and act accordingly. For example, if the A key is pressed on the keyboard, only the RadioactiveRoach GameObject will play its fact animation, leaving the others to continue their idle animations uninterrupted. The cockroach AI script also contains a group of methods that are called from a script on another object: the FactManager. The FactManager holds an array of audio clips to set and play on a single audio source depending on what key is pressed, and also detects whether a fact is currently playing. Checking whether a fact audio clip is already playing was crucial to the design of the application, as it prevents multiple instances of the same clip (as well as others, if a different button were pressed) from overlaying and creating a loud, disturbing mix of audio files playing simultaneously. Depending on what fact was already playing (both the audio clip and the associated animation for the roach), a boolean switch would be enabled to allow the next clip to play afterwards, but only if the other buttons were not spammed. If any button is clicked more than once, a fail-safe breaks the loop. This prevents the facts from playing one after the other, in the order they were selected, for five-plus minutes with no way to stop them except restarting the application. After roughly four to seven hours of debugging and rewriting, the system was finally complete. Then it was on to programming the Arduino board.
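The guard described above can be modeled as a tiny state machine. This is a simplified sketch, not the actual Unity script: the `FactManager` class name comes from the text, but `request`/`onFactFinished` and the use of strings in place of audio clips are illustrative assumptions. Only one fact plays at a time, one follow-up may be queued, and spamming trips the fail-safe that drops the queue.

```cpp
#include <string>

// Hypothetical logic model of the FactManager guard. Only one fact clip
// may play at a time; at most one follow-up request may be queued, and a
// further request while one is already queued counts as button spam,
// tripping the fail-safe that drops the queue to break the chain.
class FactManager {
public:
    // Returns true if the request was accepted (played now or queued).
    bool request(const std::string& fact) {
        if (playing.empty()) { playing = fact; return true; }
        if (pending.empty()) { pending = fact; return true; }
        // Fail-safe: spammed buttons clear the queue entirely.
        pending.clear();
        return false;
    }

    // Called when the current clip (and its roach animation) finishes.
    void onFactFinished() {
        playing = pending;
        pending.clear();
    }

    const std::string& current() const { return playing; }

private:
    std::string playing;  // fact currently playing; empty when idle
    std::string pending;  // at most one queued follow-up fact
};
```

The real script works with AudioClip arrays and animation triggers, but the accept/queue/drop decision is the part that prevents the five-plus-minute pile-up described above.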

This step was relatively simple, as I just had to use some of the built-in Arduino frameworks and write “if” statements to tell them when to fire. The script contains a variable that determines how sensitive the pads are to voltage change in the wires. I had originally set it to 50 but found that too high, later settling on 25. Within the loop method is an “if” statement that checks whether the switch on the board is on the left or right side. If it is on the left, pad inputs are sent to the computer as simulated key presses; if it is on the right, they are not. This was an important feature to ensure no keyboard requests would be sent while debugging scripts or troubleshooting wiring. Inside the switch's if statement are three more if statements, one for each keyboard input. I simply told the program: “if this pad is touched and has a voltage input greater than the threshold variable above, send a keyboard input of the letter ‘a’ to the computer.” I did the same for the other two, changing only the pad numbers to listen on and the key being sent. At the very bottom, outside of all “if” statements but still inside the loop, is a delay of 250 milliseconds. This saves resources on the Arduino and keeps the key input from being spammed; without it, a person holding the wire would send the letter over and over on every pass through the loop. I found 250 ms to be a good delay to reduce this problem without making the machine wait too long. Once I had all inputs completed, I recorded some of the narrative bits to test the input; success. The final step was to create the bathroom floor that users would be interacting with.
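The decision logic inside that loop can be sketched in isolation. This is not the actual sketch: the `keyForPad` helper is a hypothetical model, and the pad numbers and the keys other than ‘a’ are illustrative, since the text only names the first mapping. The 25 threshold and the slide-switch gate mirror the description above; on the real board this result would feed a `Keyboard` call, followed by the 250 ms delay at the bottom of `loop()`.

```cpp
// Sensitivity cutoff for the capacitive pads, per the description above.
const int kThreshold = 25;

// Hypothetical model of the per-pad "if" statements: returns the key to
// send for a pad reading, or 0 when nothing should be sent. switchLeft
// models the slide switch that enables keyboard output (left = enabled).
// Pad numbers and the 's'/'d' keys are illustrative assumptions.
char keyForPad(int pad, int reading, bool switchLeft) {
    if (!switchLeft || reading <= kThreshold) return 0;
    switch (pad) {
        case 1: return 'a';
        case 6: return 's';
        case 9: return 'd';
        default: return 0;  // untouched or unwired pad
    }
}
```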

The floor consists of two layers: a top layer of four tile boards and a bottom layer made of one sheet of carpet. Three holes were cut through the carpet: one about a quarter of a foot from the left edge, one the same distance from the right, and one in the center. Insulated wire was fed through these holes from the bottom of the carpet and up between the cracks of the tile boards. The carpet layer helped reduce the number of uneven tiles caused by the wire lying across them in odd positions, creating a cushion for the three wires. The wires were cut to length from a spool of intercom wire and had their sheaths removed on each end. The two copper ends were then twisted together to increase durability and make a more secure connection. After finding an ideal length for the wire to protrude through the carpet and up through the tile, I applied super glue to the carpet fibers directly around the hole. This keeps the wires from being pulled back and forth or ripped out of the sheet entirely. Holes were then cut into the thorax of some rubber cockroaches and the wire ends were fed through to the top, then bent down to maintain the stability of the roach and keep the wire seated. The other ends were connected to the correct pads on the Arduino, and all assets were tested. In the end, we had a product that allowed for interaction and learning by touching one of the most well-known and hated insects in the world.

Documentation (12/1/18):

Featured below is a video composition of users interacting with our exhibit at the Rochester Museum of Science.

Game Jam 2018 Work

For Game Jam 2018, I was one of the three artists on the team. Some of my other work involved programming and asset management, but my main task was to create the models in the game. Since we were going for low poly with flat shaders, the models didn't need much attention to detail; the importance was on capturing the overall form of each object. One of the first assets I was in charge of was the homeless man sitting on a mangled box. Although the model would be viewed from afar, I decided to take the time to model, sculpt and texture some detail into the face. This would also come in handy if the programmers decided to create an inventory system in which the model would be viewed up close. After finishing the homeless man, my next goal was modelling a shoe. This did not take long, as the basic shape was a rectangular prism with an extrusion on one end and beveled edges. After about 20 minutes, I had a completed shoe model for the programmers to throw into the game. One of my last models was a Tide Pod. I had some difficulty modeling this item at first: on my initial attempt, the topology of one swirl didn't match and bridge over to the next. I decided to start from scratch with a flattened cube, subdividing it heavily and then sculpting the detail in. Once the detail was added, I reduced the poly count to match the low poly look of the other assets. With all the models completed, I cleaned up some of the UV maps, since they all shared the same texture and shader (we used a texture atlas for all assets in the game). One last-minute piece I put together was a main menu; beforehand, the application just loaded straight into the game with no credits or introduction. I was in charge of animating the models and camera, adding post-processing effects, programming the UI functions and designing the credits scene transition. This menu can be seen in the video shared below: