Visual Instrument – Documentation



The input decision for this project went through three iterations. At first I intended the software to interact with an Arduino board, but I was then asked whether I could do it in a VR system. After developing for VR for about a week I was asked to return to Arduino, finally settling on the Adafruit Circuit Playground board. I used the two buttons on the Playground to simulate key presses: the left button was assigned the “a” key and the right button the “d” key. Pressing “a” emitted a single glowing orb and sent it in a random direction from the emitter’s origin point; holding “a” down caused orbs to spawn at a rapid rate, each heading in a different direction. The “d” button was saved for the solo in the middle of the performance, when the second guitarist (Mark Tremonti) takes over for the second half; the first half is played by Myles Kennedy.

The slide switch was programmed to enable and disable mouse cursor movement. This let me cut off mouse input from the accelerometer once the rotation speed was at a desirable rate, and likewise stop the flashing of the audio spectrum. When I was ready to accept mouse input again, to change rotation speeds or flash the audio spectrum some more, I flipped the switch to the other side and tilted the board as needed, changing the visuals on screen.

In the end the project required seven prefabs: four different colored orbs, one glowing grid that changes size dynamically, and the audio spectrum blocks. Five scripts were written from scratch, each responsible for a specific function, to keep the implementation clean and conserve resources (grabbing instances from the scene hierarchy, object pooling, etc.).
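The board sketch itself is not reproduced here, but the slide-switch gating described above (tilt only moves the cursor while the switch is enabled) can be illustrated in plain C++. All names and the gain value are hypothetical; the real project maps Circuit Playground accelerometer readings to HID mouse movement.

```cpp
#include <cassert>
#include <utility>

// Rough sketch of the slide-switch gating logic: accelerometer tilt
// only drives the mouse cursor while the switch is enabled.
// tiltX/tiltY are accelerometer readings (roughly -10..10 m/s^2).
std::pair<int, int> tiltToMouseDelta(bool switchEnabled,
                                     float tiltX, float tiltY) {
    if (!switchEnabled) {
        return {0, 0};  // switch off: ignore tilt, visuals stay stable
    }
    // Scale tilt down to small per-frame cursor steps (gain is arbitrary).
    const float gain = 0.5f;
    return {static_cast<int>(tiltX * gain),
            static_cast<int>(tiltY * gain)};
}
```

With the switch off, any amount of tilt yields a zero delta, which is what freezes the rotation speed and spectrum flashing at their current values.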
These scripts were: a camera spin control that governs the camera’s spin speed as well as the spectrum flash value; an audio visualizer that scales the spectrum pieces on the Y-axis to match the amplitude of the various frequencies; a script that creates the blocks and places them in the correct positions on scene start (the number of blocks depends on the bit rate of the audio); a glow grid script that controls the dynamic growth of the grid enabled during the second half of the solo; and finally a beat action script that chooses a random glowing orb from an array of orbs, chooses a random transform to aim the emitter at, and shoots the orb in the direction of that transform.
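The core step of the beat action script (pick a random orb, pick a random target, shoot along the normalized direction) can be sketched outside the engine in plain C++. The types and function names below are hypothetical stand-ins for the Unity equivalents:

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

// Unit direction from the emitter toward a randomly chosen target
// transform; the orb is launched along this vector.
Vec3 shotDirection(const Vec3& emitter, const Vec3& target) {
    Vec3 d{target.x - emitter.x, target.y - emitter.y, target.z - emitter.z};
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (len == 0.0f) return {0.0f, 0.0f, 0.0f};  // target at emitter
    return {d.x / len, d.y / len, d.z / len};
}

// Uniform random index into an array of orb prefabs or target
// transforms, as chosen on each beat.
int pickRandomIndex(std::mt19937& rng, int count) {
    std::uniform_int_distribution<int> dist(0, count - 1);
    return dist(rng);
}
```

Normalizing the direction keeps the launch speed independent of how far away the chosen target happens to be.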




Net Art

Update #1: Concept Check-In (4/29/18)


Art has for years been able to inspire people, make them think, and in some cases relate to them on a surprisingly personal level. One of the greatest aspects of art, whether a small piece painted on canvas, performance art, a photograph, video, or any other form, is its ability to communicate many messages from the single location in which it exists. Every day the human mind connects a large quantity of information without the person ever deciding to save that information for later; it can be almost anything the person hears, sees, smells, feels, or even tastes. Every second, without our realizing it, our brains take in incredible masses of information, interpret it, and store it in our subconscious. This entire process of collecting information and sending it to the brain for interpretation and storage happens in less than a second, executed without our having to think about it or even notice that the information is in our presence. What makes this Net Art piece a work of art is its ability to cause the viewer to subconsciously store the words in a special sequence that no one else may store them in, whether that is the literal order of the words or a specific connection triggered by a previous experience the viewer remembers. The viewer’s mind makes these connections as the large dictionary of words is presented in rapid succession, even though each single word is visible on screen for only a little less than two seconds.
The viewer’s eyes register that a new word has appeared on screen and file that word away in the back of the mind; in many cases the viewer believes they have forgotten the word, that it is unimportant, or never consciously registered what the word was at all. Even so, the mind still holds that connection somewhere, floating around and waiting to be triggered by experiencing a certain event or hearing a certain phrase. Sometime in the future, while simply going about daily life, the viewer may experience an event that causes a brief flashback to viewing this piece, because the subconscious created a special connection to the series of words they were subjected to, even long after seeing the piece. This “fortune cookie” effect is what makes this piece a work of art: its strong ability to make a surprising connection to a moment in the viewer’s future that has not yet occurred.

Update #2: Site link (5/8/18)